Friday, May 24, 2019 @ 02:05 PM gHale

Champion Modular Inc. is facing $687,650 in fines for exposing employees to safety and health hazards at its Strattanville, Pennsylvania, facility, said officials at the Occupational Safety and Health Administration (OSHA).

Investigators launched an inspection after an employee suffered an amputation in November.

In all, the company is facing 21 serious, eight willful, and three other-than-serious violations.

OSHA issued willful and serious citations for failing to use machine guarding, provide fall protection, and train workers on hazard communication and hearing conservation.

“Moving machine parts have the potential to cause severe workplace injuries if they are not safeguarded,” said OSHA Erie Area Office Director Brendan Claybaugh. “Employers’ use of machine guards and devices is not optional. Employers are legally responsible for ensuring that machine operators are protected.”

Champion Modular builds modular structures in sections in a controlled factory environment; the completed sections are then transported to the construction site, according to the company.

Thursday, May 23, 2019 @ 05:05 PM gHale

Maplesoft released MapleSim 2019, the latest version of its system-level modeling tool.

From digital twins for virtual commissioning to system-level models for complex engineering design projects, MapleSim helps organizations reduce development risk, lower costs, and enable innovation. The latest release provides improved performance, increased modeling scope, and more ways to connect to an existing toolchain.

Simulation is faster for everyone with MapleSim 2019 due to more efficient handling of constraints when preparing the model, resulting in more compact, faster simulation code without any loss of fidelity. These results mean MapleSim’s speeds have gotten even better, saving time and enabling more real-time applications. In addition, models developed in MapleSim and then exported for use in other tools also run faster in the target applications.

New built-in and add-on components and expanded support for external libraries mean engineers can create more models, faster, in MapleSim 2019. The new release expands the scope of models that can be created using pre-existing components, with additions to hydraulics, electrical, multibody, and more. Also new is the MapleSim Engine Dynamics Library from Modelon, an add-on library that provides specialized tools for modeling, simulating, and analyzing the performance of combustion engines. This component library is especially useful for representing transient engine responses, and can be used for analyzing engine performance, performing emissions studies, developing controls, and carrying out hardware-in-the-loop verification of vehicle electronic control units.

In addition, MapleSim 2019 offers important advances in toolchain integration. Improvements include additional options for FMI connectivity, including support for both variable-step and fixed-step solvers, for running imported models in MapleSim and exporting models to other tools. The new B&R MapleSim Connector add-on also gives automation projects a powerful, model-based ability to test and visualize control strategies from within B&R Automation Studio, and to export simulation data for motor, servo, and gearbox sizing within SERVOsoft.

MapleSim is available in English, Japanese, and French.

Wednesday, May 22, 2019 @ 05:05 PM gHale

By Gregory Hale
Increased connectivity means more data is coming into manufacturing facilities, and all that data is great and important, but operators need to know the quality of that data within the proper context.

“You can argue the industrial space is the new risk frontier,” said Leo Simonovich, vice president and global head of industrial and digital security at Siemens during Wednesday’s Spotlight on Innovation in Orlando, FL. “Our goal is to protect energy’s industrial infrastructure from increasingly sophisticated and malicious industrial threats.”

Along those lines, Siemens partnered with Chronicle in an effort to protect the energy industry’s critical infrastructure from increasingly sophisticated and malicious industrial cyber threats.

The partnership will provide operational insights and allow customers to act confidently against threats. Chronicle’s Backstory will serve as the backbone of Siemens’ managed services, providing a centralized analytical engine to aggregate OT data, identify associated patterns of behavior, and conduct deep forensic analysis. The combination of Chronicle’s technology and Siemens’ know-how will not only allow customers to detect anomalies, but also give them context and the confidence to take action.

Chronicle’s Backstory is a global security telemetry platform for investigation and threat hunting. It allows for increased visibility and puts data in proper context for end users to utilize.

“Over the last decade I have been trying to work on trying to detect and disrupt advanced persistent threats from nation state actors that can cause so much harm to traditional systems as well as industrial control systems and all of the type of hardware that exists in the world that we have come to rely on,” said Mike Wiacek, co-founder and chief security officer at Chronicle, which was born in Alphabet’s moonshot factory, and inspired by Google’s own security techniques. “Security analytics make up our DNA. At Google we were always trying to detect and deter attacks. We have to be as agile as the bad guys.”

Along the lines of being agile, Chronicle wants to bring more to the table.

“We looked at some of the systems we built for Google’s protection and we thought how can we take these and develop it for the world,” Wiacek said. “Backstory is a global security analytics platform designed to collect, integrate and store petabytes of data to allow analysts to analyze that over a significant piece of time.

“We can utilize the platform in the industrial space to tackle the interconnected world between information technology and operational technology. At its core, Backstory provides us visibility and context. It is a tool that can provide in-depth forensic investigation and forensic analysis. We can look at behaviors where analysts can look back across time at different dimensions of data to identify and understand unusual activity that signals an attack is underway.”

The Siemens-Chronicle partnership will help energy companies leverage the cloud to store and categorize data, while applying analytics, artificial intelligence, and machine learning to OT systems that can identify patterns, anomalies, and cyber threats.

Chronicle’s Backstory will be the backbone of Siemens managed service for industrial cyber monitoring, including in hybrid and cloud environments. This combined solution enables security across the industry’s operating environment – from energy exploration and extraction to power generation and delivery.

“The energy industry faces a fairly low level of maturity, most customers don’t know what is in their environment, and don’t know how to prioritize their risk and ultimately what to do about it. The core challenge today is visibility,” Simonovich said.

“To take advantage of digitalization we have to do security right. Today’s attacks like WannaCry, NotPetya, Triton and Norsk Hydro are leading to a breakdown of trust in the physical and digital worlds,” he said. “Customers are skittish to connect and take advantage of digitalization. We need to give customers transparency, help them understand what is happening and work together on a joint blueprint to take action.”

Tuesday, May 21, 2019 @ 09:05 PM gHale

The Texas Legislature passed a bill that would support a fledgling industry that aims to reduce waste by returning plastic to its original chemical components.

When that happens, it can then be reused for fuels and feedstocks for new plastic products.

The bill, supported by Chevron Phillips Chemical of The Woodlands and the Texas oil major Exxon Mobil, is a response to the growing outcry over plastic waste choking the world’s oceans, contaminating soil and threatening marine life and wildlife. Chemical recycling is not only viewed by chemical makers as a way to reduce plastic pollution, but also as a new and potentially $10 billion industry.

Chemical recycling uses chemical processes to convert plastic waste into fuels to use in cars or manufacturing feedstocks that can be turned into new plastics. Although chemical recycling itself isn’t new, more petrochemical companies are investing in improving the technology to make it work on a commercial scale.

The bill, which was sent to Gov. Greg Abbott’s office to be signed into law, would regulate chemical recycling operations as manufacturing plants, rather than solid waste disposal sites, a designation that would spare chemical recyclers from regulations imposed on solid waste sites. The plants would still have to comply with state and federal air, water and other environmental laws.

The regulatory certainty provided by the legislation would make it easier for companies to invest in and obtain financing for chemical recycling agreements, said Craig Cookson, senior director of recycling and recovery at American Chemistry Council, the chemical industry trade group.

The bill is part of a national push by the petrochemical industry to promote chemical recycling. Texas is the sixth state to pass such legislation — joining Florida, Wisconsin, Georgia, Iowa and Tennessee, and similar bills are proposed in Rhode Island, South Carolina and Illinois.

Cookson said the significance of the legislation is especially big in Texas, which has the nation’s largest chemical manufacturing industry. Converting just 25 percent of the state’s plastic waste into manufacturing feedstocks and transportation fuels could support 40 chemical recycling plants and generate $501 million in economic output annually, ACC estimates.

Monday, May 20, 2019 @ 05:05 PM gHale

It may be hard to think of a misconfigured system as a threat, but it can be the silent killer. To that point, publicly disclosed misconfiguration incidents increased 20 percent year-over-year, a new report found.

While there was a rise in incidents, on the positive side, misconfigurations were not responsible for as many compromised records as the year before. There was a 52 percent decrease in records compromised because of this threat vector, according to the IBM X-Force Threat Intelligence Index 2019.

IBM Security releases the IBM X-Force Threat Intelligence Index annually; the report summarizes the most prominent threats identified by its research teams over the past year.

Misconfigured cloud servers, including publicly accessible cloud storage, unsecured cloud databases, improperly secured rsync backups, and open Internet-connected network-attached storage devices, contributed to the exposure of more than 990 million records in 2018. This represents 43 percent of the more than 2.7 billion compromised records tracked by X-Force research for the year.
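
These misconfiguration classes lend themselves to simple inventory audits. The sketch below is hypothetical; the inventory format, field names, and messages are assumptions for illustration, not anything from the X-Force report:

```python
# Hypothetical inventory-audit sketch for the misconfiguration classes
# named in the report: public storage ACLs and Internet-reachable
# backup/NAS services. All field names here are illustrative assumptions.

RISKY_PORTS = {873: "rsync", 445: "SMB", 2049: "NFS"}

def find_exposed_assets(inventory):
    """Flag world-readable storage and Internet-reachable backup/NAS services."""
    findings = []
    for asset in inventory:
        if asset.get("acl") == "public-read":
            findings.append((asset["name"], "public storage ACL"))
        for port in asset.get("open_ports", []):
            if port in RISKY_PORTS:
                findings.append((asset["name"], f"{RISKY_PORTS[port]} open to the Internet"))
    return findings

inventory = [
    {"name": "backup-01", "acl": "private", "open_ports": [873]},
    {"name": "web-assets", "acl": "public-read", "open_ports": []},
]
print(find_exposed_assets(inventory))
```

Real deployments would pull the inventory from cloud-provider APIs and port scans rather than a hard-coded list, but the triage logic is the same.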

While this number is notably lower than the 2 billion records compromised in 2017, the total number of publicly disclosed incidents that were attributed to misconfigured assets still increased 20 percent, year-over-year, the report said.

A 2018 survey indicated that misconfiguration is now the single-biggest risk to cloud security, with 62 percent of surveyed IT and security professionals noting it as a problem, followed by misuse of employee credentials or improper access at 55 percent, and non-secure interfaces at 50 percent.

Misconfigured systems often give attackers access to a plethora of data including email addresses, user names, passwords, credit card and health data, and national identification numbers. In one of the largest incidents in 2018, a major marketing firm leaked 340 million records of personal data including addresses, phone numbers, family structures, and extensive profiling data.

Misconfigured systems could potentially expose internal company communications across a firm’s entire global footprint and even lead to detrimental exposure of intellectual property, trade secrets, and the organization’s strategic plans, the report said.

Leaked login data from misconfigured assets can be used in targeted brute-force attacks where user IDs and passwords are reused across multiple assets and websites, the report said. Exposed data could also be used as part of larger identity theft schemes and to perform fraudulent activity. While most publicly disclosed breaches involving misconfigurations appear to be the result of inadvertent actions, a malicious insider could purposefully expose data and make it appear as an unintentional act.

Monday, May 20, 2019 @ 04:05 PM gHale

Kaspersky Lab researchers created detection strategies for a new Microsoft RDP vulnerability to help all security vendors prepare and protect.

Microsoft issued a patch May 14 for a “wormable” Remote Desktop Protocol vulnerability the software giant said could be quickly exploited by attackers.

Kaspersky Lab researchers analyzed and successfully created a detection strategy for the vulnerability. They are making this available to colleagues across the security industry so others can create their own detection strategies.

“We analyzed the vulnerability and can confirm that it is exploitable. We have therefore developed detection strategies for attempts to exploit the vulnerability and would now like to share those with trusted industry parties, so that together we can build a shield around all our customers before the attackers figure out what to do and unleash another devastating worm on the world,” said Boris Larin, security researcher at Kaspersky Lab.

There is a critical Remote Code Execution vulnerability in Remote Desktop Services, formerly known as Terminal Services, that affects older versions of Windows.

“This vulnerability is pre-authentication and requires no user interaction. In other words, the vulnerability is ‘wormable,’ meaning that any future malware that exploits this vulnerability could propagate from vulnerable computer to vulnerable computer in a similar way as the WannaCry malware spread across the globe in 2017,” said Simon Pope, director of incident response at Microsoft Security Response Center (MSRC).

While Microsoft has observed no exploitation of this vulnerability, tracked as CVE-2019-0708, it is likely attackers will write an exploit for it and incorporate it into their malware.

“Now that I have your attention,” Pope said in the post, “it is important that affected systems are patched as quickly as possible to prevent such a scenario from happening. In response, we are taking the unusual step of providing a security update for all customers to protect Windows platforms, including some out-of-support versions of Windows.”

Vulnerable in-support systems include Windows 7, Windows Server 2008 R2, and Windows Server 2008. Downloads for in-support versions of Windows can be found in the Microsoft Security Update Guide. Customers who use an in-support version of Windows and have automatic updates enabled are automatically protected. 

Out-of-support systems include Windows Server 2003 and Windows XP. If a user is working with an out-of-support version, the best way to address this vulnerability is to upgrade to the latest version of Windows. “We are making fixes available for these out-of-support versions of Windows in KB4500705,” Pope said.
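
The remediation guidance above reduces to a small triage table. The sketch below is illustrative only, built from the version lists quoted in this article; the OS-name strings and remediation messages are my own assumptions, not a vendor API:

```python
# Illustrative CVE-2019-0708 triage helper based on the version lists
# described in Microsoft's guidance. OS names and messages are
# assumptions for the sketch, not output of any real tool.

IN_SUPPORT_VULNERABLE = {"Windows 7", "Windows Server 2008", "Windows Server 2008 R2"}
OUT_OF_SUPPORT_VULNERABLE = {"Windows XP", "Windows Server 2003"}

def triage(os_name):
    """Map an OS name to the remediation path the guidance describes."""
    if os_name in IN_SUPPORT_VULNERABLE:
        return "apply the May 2019 security update (automatic if updates are enabled)"
    if os_name in OUT_OF_SUPPORT_VULNERABLE:
        return "apply KB4500705 or upgrade to a current version of Windows"
    return "not listed as affected"

for host_os in ("Windows 7", "Windows XP", "Windows 10"):
    print(host_os, "->", triage(host_os))
```

An asset inventory could be run through a helper like this to prioritize which hosts need manual patching versus which are already covered by automatic updates.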

Security vendors who would like to receive further details can contact Kaspersky Lab at nomoreworm@kaspersky.com.

Monday, May 20, 2019 @ 02:05 PM gHale

A former CIA officer will spend the next 20 years of his life behind federal bars after his conviction for conspiracy to transmit national defense information to an agent of China.

Kevin Patrick Mallory, 62, of Leesburg, Virginia, received 20 years in prison Friday, to be followed by five years of supervised release, said officials at the Department of Justice (DoJ).

“This sentence, together with the recent guilty pleas of Ron Hansen in Utah and Jerry Lee in Virginia, deliver the stern message that our former intelligence officers have no business partnering with the Chinese, or any other adversarial foreign intelligence service,” said Assistant Attorney General John Demers.

“Mallory not only put our country at great risk, but he endangered the lives of specific human assets who put their own safety at risk for our national defense,” said U.S. Attorney G. Zachary Terwilliger for the Eastern District of Virginia. “There are few crimes in this country more serious than espionage, and this office has a long history of holding accountable those who betray our country.”

“U.S. Government employees are trusted to keep the nation’s secrets safe,” said Assistant Director in Charge Nancy McNamara of the FBI’s Washington Field Office, “and this case shows the violation of that trust and duty will not be accepted.”

Mallory was found guilty by a federal jury in June 2018 of conspiracy to deliver, attempted delivery, delivery of national defense information to aid a foreign government and making material false statements. The district court subsequently ordered acquittal as to the delivery and attempted delivery of national defense information counts due to lack of venue.

According to court records and evidence presented at trial, in March and April 2017, Mallory, a former U.S. intelligence officer, travelled to Shanghai to meet with an individual, Michael Yang, who held himself out as a People’s Republic of China think tank employee, but whom Mallory assessed to be a Chinese Intelligence Officer.

Mallory, a United States citizen who speaks fluent Mandarin Chinese, consented to an FBI review of a covert communications (covcom) device he had been given by Yang to facilitate covert communications between the two, DoJ said.

Analysis of the device, which was a Samsung Galaxy smartphone, revealed a number of communications in which Mallory and Yang talked about classified information that Mallory could sell to the PRC’s intelligence service. FBI analysts were able to determine Mallory had completed all of the steps necessary to securely transmit at least five classified U.S. government documents via the covcom device, one of which contained unique identifiers for human sources who had helped the United States government, DoJ officials said.

At least two of the documents were successfully transmitted, and Mallory and Yang communicated about those two documents on the covcom device.

Monday, May 20, 2019 @ 11:05 AM gHale

More and more people interact with the Internet of Things (IoT) in daily life.

IoT includes the devices and appliances in homes – such as smart TVs, virtual assistants like Amazon’s Alexa or learning thermostats like Nest – that connect to the Internet.

IoT also includes wearables such as the Apple Watch or Bluetooth chips that keep track of car keys. On top of that, cars, if equipped with sensors and computers, are also part of the IoT.

“Traditionally, when you think about the Internet, it’s someone on a computer communicating with something out in the world – usually someone else on a computer,” said Perry Alexander, AT&T Foundation Distinguished Professor of Electrical and Computer Science and director of the Information and Telecommunication Technology Center at the University of Kansas. “The Internet of Things is called that because now we have things talking to other things on the Internet without human intervention.”

IoT Vulnerabilities
But in an age where data theft and cyberattacks are increasingly routine, the IoT has security vulnerabilities that must be addressed as the popularity of IoT devices grows.

“These devices are characterized by being low-capability,” said Alexander. “The security story with the IoT is pretty awful. Because these devices are cheap and small, you can’t add much capability to achieve the level of security you might want to achieve.”

Alexander is leading a multidisciplinary team at KU, including computer scientists, electrical and computer engineers, psychologists, sociologists and philosophers, to tackle the fundamental science underpinning the security of the IoT. The team has just received funding from the National Security Agency (NSA) to shore up the cybersecurity of the IoT, developing the technology that could be integrated into consumer technology in the coming few years.

“The NSA for the last seven years has had a collection of universities they call ‘lablets’ that execute a collection of projects for them – we were able to compete this year and were one of six selected to host these lablets,” Alexander said. “These are places where the NSA contracts foundational research in the style of the National Science Foundation – big-thinking research. Lablets are centered around the NSA hard problems, specific problems the agency feels they need to solve if they’re going to make progress toward solving our cybersecurity problems.”

One aspect of the research at KU will investigate solutions to “side-channel attacks,” which include Spectre and Meltdown, vulnerabilities revealed to exist in central processor computer chips manufactured in the past two decades.

“A side-channel attack is a way of communicating that’s unintended,” Alexander said. “When you go on your web browser to a website, that path is intended. Unfortunately, in any computer system there are ways to communicate that are unintended. Those are side-channel attacks. A bad guy can use these vulnerabilities in everything from a state-sponsored attack to taking credit card numbers.”
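
A classic toy illustration of this idea (not from the article) is a string comparison that exits at the first mismatch: its running time is proportional to the length of the matching prefix, so an attacker who can measure time can recover a secret one character at a time. The sketch counts comparison steps instead of wall-clock time to keep the leak deterministic:

```python
# Toy timing side channel (illustrative only): an early-exit string
# comparison does work proportional to the length of the matching
# prefix. "Time" is modeled as a count of characters examined.

def leaky_equals(secret, guess):
    """Early-exit compare; returns (equal?, characters examined)."""
    steps = 0
    if len(secret) != len(guess):
        return False, steps
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return True, steps

# A guess with a longer correct prefix costs measurably more "time",
# which leaks the secret one character at a time to a patient attacker:
_, fast = leaky_equals("hunter2", "zzzzzzz")  # wrong at character 1
_, slow = leaky_equals("hunter2", "hunzzzz")  # wrong at character 4
print(fast, slow)  # 1 4
```

Constant-time comparisons (e.g. Python’s `hmac.compare_digest`) close this particular channel; hardware side channels such as Spectre and Meltdown exploit subtler timing effects in speculative execution and caches, but the principle of an unintended information path is the same.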

Other efforts will focus on securing information in the cloud, where data is saved on remote servers instead of a personal or local machine.

Cloud Protection
“Almost all IoT devices share or store their information in the cloud,” said Alexander. “If you have an IoT in your house, you probably have a hub that talks to the cloud. How do you protect the information coming from your house, take it into the cloud and protect it while it’s there?”

The team also plans to find ways to enhance resilience, improving IoT devices’ ability to withstand unforeseen interruptions, or come back online as soon as interruptions are solved.

“If you think about a car hitting a telephone pole or a switch going bad or a lightning strike – this pulls part of your network offline,” Alexander said. “Resilience means understanding what capabilities you still have when part of your system goes down and making sure your network can recover once the problem is fixed. You as a human being are very resilient. When you cut your finger making dinner, you don’t collapse. Your skin grows back – in a week you don’t even know it happened. What properties does your skin exhibit that we could take and put in computer systems that would allow them to behave in a similar way?”

Perry and his colleagues also hope to improve trust between computers that theoretically could scale upward to encompass all the computers on the Internet.

“When my computer accesses another computer, how do I trust that computer to be in a good state?” he asked. “If you and I wanted our computers to talk, and I wanted to trust your computer hadn’t been damaged or compromised in some way, that’s doable. Now, think about all the computers on a college campus — that’s still tiny. Now think about all the computers in the world, that’s different. Originally, you could draw all the nodes for the entire Internet on the back of a napkin. Now we don’t even know how big it is, it’s so expansive and pervasive.”

Much of the work under the new contract combines expertise in computing and communications with multidisciplinary expertise in human behavior and thinking.

“A lot of cybersecurity is related to human behavior – things as simple as are you using strong passwords, or how are you using the internet?” Alexander said.

Friday, May 17, 2019 @ 04:05 PM gHale

There is a new framework in development for deep neural networks that allows artificial intelligence (AI) systems to better learn new tasks while “forgetting” less of what they have learned from previous tasks.

Using the framework means it is possible to learn a new task that can make the AI better at performing previous tasks, a phenomenon called backward transfer.

“People are capable of continual learning; we learn new tasks all the time, without forgetting what we already know,” said Tianfu Wu, an assistant professor of electrical and computer engineering at North Carolina State University and co-author of a paper on the work. “To date, AI systems using deep neural networks have not been very good at this.”

“Deep neural network AI systems are designed for learning narrow tasks,” said Xilai Li, a co-lead author of the paper and a Ph.D. candidate at NC State. “As a result, one of several things can happen when learning new tasks. Systems can forget old tasks when learning new ones, which is called catastrophic forgetting. Systems can forget some of the things they knew about old tasks, while not learning to do new ones as well. Or systems can fix old tasks in place while adding new tasks – which limits improvement and quickly leads to an AI system that is too large to operate efficiently. Continual learning, also called lifelong-learning or learning-to-learn, is trying to address the issue.”

Learn to Grow
“We have proposed a new framework for continual learning, which decouples network structure learning and model parameter learning,” said Yingbo Zhou, co-lead author of the paper and a research scientist at Salesforce Research. “We call it the Learn to Grow framework. In experimental testing, we’ve found that it outperforms previous approaches to continual learning.”

To understand the Learn to Grow framework, think of deep neural networks as a pipe filled with multiple layers. Raw data goes into the top of the pipe, and task outputs come out the bottom. Every “layer” in the pipe is a computation that manipulates the data in order to help the network accomplish its task, such as identifying objects in a digital image. There are multiple ways of arranging the layers in the pipe, which correspond to different “architectures” of the network.

When asking a deep neural network to learn a new task, the Learn to Grow framework begins by conducting something called an explicit neural architecture optimization via search. What this means is that as the network comes to each layer in its system, it can decide to do one of four things: skip the layer; use the layer in the same way that previous tasks used it; attach a lightweight adapter to the layer, which modifies it slightly; or create an entirely new layer.

This architecture optimization effectively lays out the best topology, or series of layers, needed to accomplish the new task. Once this is complete, the network uses the new topology to train itself on how to accomplish the task – just like any other deep learning AI system.
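
The four per-layer choices define a combinatorial space of candidate topologies. The sketch below only pictures that space; the choice names follow the article, but the brute-force enumeration is my own illustration, since the actual framework learns the choices via search rather than enumerating them:

```python
# Illustrative enumeration of the per-layer choice space described above.
# The four choice names come from the article; everything else is a
# sketch, not code from the Learn to Grow paper.

from itertools import product

CHOICES = ("skip", "reuse", "adapt", "new")

def candidate_topologies(num_layers):
    """Every assignment of one of the four choices to each layer."""
    return list(product(CHOICES, repeat=num_layers))

topologies = candidate_topologies(3)
print(len(topologies))    # 4**3 = 64 candidate topologies for 3 layers
print(topologies[0])      # ('skip', 'skip', 'skip')
```

The space grows as 4^L in the number of layers, which is why the framework optimizes the topology with a learned search instead of exhaustive evaluation.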

“We’ve run experiments using several datasets, and what we’ve found is that the more similar a new task is to previous tasks, the more overlap there is in terms of the existing layers that are kept to perform the new task,” Li said. “What is more interesting is that, with the optimized, or ‘learned,’ topology, a network trained to perform new tasks forgets very little of what it needed to perform the older tasks, even if the older tasks were not similar.”

Better Accuracy
The researchers also ran experiments comparing the Learn to Grow framework’s ability to learn new tasks to several other continual learning methods, and found the Learn to Grow framework had better accuracy when completing new tasks.

To test how much each network may have forgotten when learning the new task, the researchers then tested each system’s accuracy at performing the older tasks, and the Learn to Grow framework again outperformed the other networks.

“In some cases, the Learn to Grow framework actually got better at performing the old tasks,” said Caiming Xiong, the research director of Salesforce Research and a co-author of the work. “This is called backward transfer, and occurs when you find that learning a new task makes you better at an old task. We see this in people all the time; not so much with AI.”
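
Backward transfer has a standard quantitative definition in the continual-learning literature (the formula below comes from that literature and is not necessarily the exact metric used in this paper): given an accuracy matrix R where R[i][j] is accuracy on task j after training on task i, it is the average accuracy gain on earlier tasks once all training ends:

```python
# Sketch of the standard backward-transfer (BWT) metric from the
# continual-learning literature. R[i][j] is accuracy on task j after
# training finishes on task i; positive BWT means later learning
# improved earlier tasks.

def backward_transfer(R):
    """Average accuracy change on earlier tasks after all training."""
    T = len(R)
    final = R[T - 1]                       # accuracies after the last task
    return sum(final[j] - R[j][j] for j in range(T - 1)) / (T - 1)

# Two tasks: accuracy on task 0 rose from 0.80 to 0.85 after task 1.
R = [[0.80, 0.00],
     [0.85, 0.90]]
print(backward_transfer(R))                # ~0.05: positive backward transfer
```

A negative value of the same quantity corresponds to catastrophic forgetting, the failure mode described earlier in the article.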

Friday, May 17, 2019 @ 04:05 PM gHale

Boston Red Sox star outfielder Mookie Betts steps up to the plate on a 3-2 count, studies the pitcher and the situation, gets the go-ahead from third base, tracks the ball’s release, swings … and gets a single up the middle. Just another trip to the plate for the reigning American League Most Valuable Player.

Betts has honed natural reflexes, years of experience, knowledge of the pitcher’s tendencies, and an understanding of the trajectories of various pitches. What he sees, hears, and feels seamlessly combines with his brain and muscle memory to time the swing that produces the hit.

Now apply the knowledge Betts has gleaned from years of experience and ask whether a robot could do the same thing. The answer is no, not today.

The robot would need to use a linkage system to slowly coordinate data from its sensors with its motor capabilities. And its memory is poor.

But that all may change as there is a new way to combine perception and motor commands using the hyperdimensional computing theory, which could fundamentally alter and improve the basic artificial intelligence (AI) task of sensorimotor representation — how robots translate what they sense into what they do.

“Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception” is a paper written by University of Maryland computer science Ph.D. students Anton Mitrokhin and Peter Sutor, Jr.; Cornelia Fermüller, an associate research scientist with the University of Maryland Institute for Advanced Computer Studies; and Computer Science Professor Yiannis Aloimonos. Mitrokhin and Sutor are advised by Aloimonos.

Integration is Key
Integration is the most important challenge facing the robotics field. A robot’s sensors and the actuators that move it are separate systems, linked together by a central learning mechanism that infers a needed action given sensor data, or vice versa.

The cumbersome three-part AI system – each part speaking its own language – is a slow way to get robots to accomplish sensorimotor tasks. The next step in robotics will be to integrate a robot’s perceptions with its motor capabilities. This fusion, known as “active perception,” would provide a more efficient and faster way for the robot to complete tasks.

In the new computing theory, a robot’s operating system would be based on hyperdimensional binary vectors (HBVs), which exist in a sparse and extremely high-dimensional space. HBVs can represent disparate discrete things – for example, a single image, a concept, a sound or an instruction; sequences made up of discrete things; and groupings of discrete things and sequences. They can account for all these types of information in a meaningfully constructed way, binding each modality together in long vectors of 1s and 0s with equal dimension. In this system, action possibilities, sensory input and other information occupy the same space, are in the same language, and are fused, creating a kind of memory for the robot.

A hyperdimensional framework can turn any sequence of “instants” into new HBVs, and group existing HBVs together, all in the same vector length. This is a natural way to create semantically significant and informed “memories.” The encoding of more and more information in turn leads to “history” vectors and the ability to remember. Signals become vectors, indexing translates to memory, and learning happens through clustering.
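
A toy sketch of these operations, assuming the common binary-HDC conventions (XOR for binding, bitwise majority vote for bundling, normalized Hamming distance for similarity); none of this is code from the Maryland paper:

```python
# Toy sketch of hyperdimensional binary vector (HBV) operations using
# common binary-HDC conventions: XOR binds two vectors, bitwise majority
# vote bundles several, and normalized Hamming distance measures
# similarity (0.0 = identical, ~0.5 = unrelated random vectors).

import random

D = 10_000                 # hypervector dimensionality
rng = random.Random(0)     # seeded for reproducibility

def rand_hv():
    return [rng.randint(0, 1) for _ in range(D)]

def bind(a, b):            # XOR: reversible pairing of two modalities
    return [x ^ y for x, y in zip(a, b)]

def bundle(*vs):           # bitwise majority vote over several vectors
    return [1 if sum(bits) * 2 > len(vs) else 0 for bits in zip(*vs)]

def hamming(a, b):         # normalized Hamming distance
    return sum(x != y for x, y in zip(a, b)) / D

sensor, action = rand_hv(), rand_hv()
record = bind(sensor, action)      # fuse perception and action vectors
recovered = bind(record, action)   # XOR is its own inverse
print(hamming(recovered, sensor))  # 0.0 -> exact recovery

# A "history" vector bundling three records stays close to each member:
memory = bundle(record, rand_hv(), rand_hv())
print(hamming(memory, record))     # ~0.25, well below the ~0.5 of chance
```

Because binding, bundling, and similarity all operate on vectors of the same fixed length, sensory input, actions, and accumulated history can live in one representation, which is the property the paragraph above describes.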

The robot’s memories of what it has sensed and done in the past could lead it to expect future perception and influence its future actions. This active perception would enable the robot to become more autonomous and better able to complete tasks.

Knows What to Look For
“An active perceiver knows why it wishes to sense, then chooses what to perceive, and determines how, when and where to achieve the perception,” Aloimonos said. “It selects and fixates on scenes, moments in time, and episodes. Then it aligns its mechanisms, sensors, and other components to act on what it wants to see, and selects viewpoints from which to best capture what it intends.”

“Our hyperdimensional framework can address each of these goals.”

Applications of the Maryland research could extend far beyond robotics. The ultimate goal is to be able to do AI itself in a fundamentally different way: From concepts to signals to language. Hyperdimensional computing could provide a faster and more efficient alternative model to the iterative neural net and deep learning AI methods currently used in computing applications such as data mining, visual recognition and translating images to text.

“Neural network-based AI methods are big and slow, because they are not able to remember,” Mitrokhin said. “Our hyperdimensional theory method can create memories, which will require a lot less computation, and should make such tasks much faster and more efficient.”