Facebook Research is developing touchy-feely curious robots

As a social media platform with global reach, Facebook leans extensively on its artificial intelligence and machine-learning systems to keep the site online and harmful content off it (at least, some of the time). Following its announcement at the start of the month regarding self-supervised learning, computer vision, and natural language processing, Facebook on Monday shared details about three additional areas of research that could eventually lead to more capable and curious AI.

“Much of our work in robotics is focused on self-supervised learning, in which systems learn directly from raw data so they can adapt to new tasks and new circumstances,” a team of researchers from FAIR (Facebook AI Research) wrote in a blog post. “In robotics, we’re advancing techniques such as model-based reinforcement learning (RL) to enable robots to teach themselves through trial and error using direct input from sensors.”

Specifically, the team has been trying to get a six-legged robot to teach itself to walk without any outside assistance. “Generally speaking, locomotion is a very difficult task in robotics, and this is what makes it so exciting from our perspective,” Roberto Calandra, a FAIR researcher, told Engadget. “We have been able to design algorithms for AI and actually test them on a really challenging problem that we otherwise don’t know how to solve.”

The hexapod begins its existence as a pile of legs with no understanding of its surroundings. Using a reinforcement-learning algorithm, the robot slowly figures out a controller that will help it meet its goal of forward locomotion. And since the algorithm utilizes a recursive self-improvement function, the robot can monitor the information it gathers and further optimize its behavior over time. That is, the more experience the robot gains, the better it performs.

This is easier said than done, given that the robot is expected to figure out not only its location and orientation in space but its balance and momentum as well — all from a series of sensors located in the machine’s knees. By optimizing the robot’s behavior and focusing on getting it walking in as few steps as possible, Facebook taught the robot how to “walk” in a matter of hours, rather than days.
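
To make that loop concrete, here is a minimal, self-contained sketch of the model-based RL recipe described above: collect experience, fit a dynamics model to raw sensor readings, and plan through the learned model to make forward progress. The one-dimensional “walker,” the linear model and the random-shooting planner are illustrative stand-ins, not FAIR’s actual environment or algorithms.

```python
"""Toy sketch of a model-based RL loop: act, fit a dynamics model to the
experience gathered so far, and plan actions through that model. All names
and the 1-D environment are hypothetical stand-ins, not FAIR's code."""
import numpy as np

rng = np.random.default_rng(0)

def step_env(state, action):
    """True dynamics, unknown to the agent: position advances with the
    action, corrupted by noise (a stand-in for messy contact dynamics)."""
    return state + 0.5 * action + 0.05 * rng.normal()

class LinearDynamicsModel:
    """Learned model: predicts the change in state from (state, action)."""
    def __init__(self):
        self.w = np.zeros(2)

    def fit(self, states, actions, next_states):
        X = np.stack([states, actions], axis=1)
        y = next_states - states
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(self, state, action):
        return state + self.w @ np.array([state, action])

def plan(model, state, horizon=5, n_candidates=64):
    """Random-shooting planner: roll candidate action sequences through the
    learned model and return the first action of the most forward one."""
    best_action, best_progress = 0.0, -np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, size=horizon)
        s = state
        for a in seq:
            s = model.predict(s, a)
        if s - state > best_progress:
            best_progress, best_action = s - state, seq[0]
    return best_action

# "Recursive self-improvement": alternate between acting with the current
# model and refitting the model on everything gathered so far.
model = LinearDynamicsModel()
states, actions, next_states = [0.0], [], []
state = 0.0
for episode in range(10):
    for _ in range(20):
        a = plan(model, state) if actions else rng.uniform(-1, 1)
        s_next = step_env(state, a)
        actions.append(a)
        next_states.append(s_next)
        state = s_next
        states.append(state)
    model.fit(np.array(states[:-1]), np.array(actions), np.array(next_states))
    print(f"episode {episode}: reached {state:.2f}")
```

As the experience buffer grows, the fitted model improves and the planner's imagined rollouts get more accurate, which is the sense in which more experience yields better performance.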

But what’s a hexapod to do once it figures out how to move? Go exploring, obviously. But it’s not so easy to induce wanderlust in robots that are typically trained to achieve a narrowly defined goal. Yet this is exactly what Facebook is trying to do, with some help from its colleagues at NYU and a robotic arm.

Previous research into imparting a sense of curiosity to AI has focused on reducing uncertainty. Facebook’s latest efforts strive for the same goal but do so in a more structured manner.

“We actually started with a model that doesn’t know much about itself,” FAIR researcher Franziska Meier told Engadget. “At this point, the robot knows how to hold its arm, but it doesn’t actually know what actions to apply to reach a certain target.” But as the robot learns which torques need to be applied to move the arm into the next target configuration, it can eventually begin to optimize its planning.

“We use this model that tells us this, to plan ahead for a number of time steps,” Meier continued. “And we try to use this planning procedure to optimize the action sequence to achieve the task.” To prevent the robot from optimizing its routines too highly and getting caught in a loop, the research team rewarded the robot for actions that resolved uncertainty.

“We do this exploration, we actually learn a better model faster, achieve the task faster, and we learn a model that generalizes better to new tasks,” Meier concluded.
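
One common way to realize the uncertainty bonus Meier describes is to plan through an ensemble of learned models and reward candidate action sequences for the disagreement they provoke. The sketch below takes that approach; the ensemble, the linear models and every name in it are assumptions for illustration, not FAIR’s implementation.

```python
"""Sketch of curiosity-driven planning: score imagined action sequences by
task progress plus a bonus for model uncertainty, measured here as the
disagreement among an ensemble of models. Purely illustrative."""
import numpy as np

rng = np.random.default_rng(1)

class EnsembleModel:
    """A few independently perturbed linear models; the spread of their
    predictions serves as an uncertainty estimate."""
    def __init__(self, n_members=5):
        self.members = [rng.normal(scale=0.3, size=2) for _ in range(n_members)]

    def predict_all(self, state, action):
        x = np.array([state, action])
        return np.array([state + w @ x for w in self.members])

def score_sequence(model, state, seq, target, beta=0.5):
    """Task score (closeness to target) plus a curiosity bonus: the ensemble
    disagreement accumulated along the imagined trajectory."""
    s, bonus = state, 0.0
    for a in seq:
        preds = model.predict_all(s, a)
        bonus += preds.std()   # disagreement = uncertainty worth resolving
        s = preds.mean()
    return -abs(s - target) + beta * bonus

model, state, target = EnsembleModel(), 0.0, 1.0
candidates = [rng.uniform(-1, 1, size=4) for _ in range(128)]
best = max(candidates, key=lambda seq: score_sequence(model, state, seq, target))
print("first action of best plan:", best[0])
```

The bonus term is what keeps the planner from settling into a tight loop of familiar actions: sequences whose outcomes the models already agree on earn no exploration credit.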

Finally, Facebook has been hard at work teaching robots how to feel. Not emotionally, but physically. And it’s leveraging a predictive deep-learning model originally designed for video. “It’s essentially a technique that can predict videos from the current state, from the current image and an action,” Calandra explained.

The team trained the AI to work directly with raw data, in this case readings from a high-resolution tactile sensor, rather than through a model. “Our work shows that such policies may be learned entirely without rewards, through diverse unsupervised exploratory interactions with the environment,” the researchers concluded. During the experiment, the robot was able to successfully manipulate a joystick, roll a ball and identify the correct face of a 20-sided die.

“We show that we can essentially have a robot manipulating small objects in an unsupervised manner,” Calandra said. “And what it means in practice is… we can actually predict accurately what’s going to be the output of [a given] action. This allows us to start planning into the future. We can optimize for the sequence of actions that will actually yield the desired outcome.”
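
The planning pattern Calandra describes, predicting the outcome of each candidate action sequence and keeping the one whose predicted outcome best matches the goal, might look roughly like the following sketch. The flattened “tactile frames” and the linear predictor are toy stand-ins for the deep video-prediction model used in the actual work.

```python
"""Sketch of action-conditioned tactile prediction and planning: predict the
next sensor frame from the current frame and an action, then search for the
action sequence whose predicted final frame is closest to a goal frame.
All weights and dimensions are hypothetical stand-ins."""
import numpy as np

rng = np.random.default_rng(2)

def predict_next(tactile, action, W_t, W_a):
    """Stand-in forward model: next tactile frame from frame + action."""
    return W_t @ tactile + W_a @ action

def plan_to_goal(tactile, goal, W_t, W_a, horizon=3, n_candidates=256):
    """Pick the action sequence whose predicted final tactile frame is
    closest to the desired one (e.g., 'joystick pushed left')."""
    best_seq, best_err = None, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, size=(horizon, 2))
        t = tactile
        for a in seq:
            t = predict_next(t, a, W_t, W_a)
        err = np.linalg.norm(t - goal)
        if err < best_err:
            best_err, best_seq = err, seq
    return best_seq

dim = 16                                  # toy tactile frame size
W_t = np.eye(dim) * 0.95                  # stand-in learned weights
W_a = rng.normal(scale=0.1, size=(dim, 2))
current = rng.normal(size=dim)
goal = rng.normal(size=dim)
plan = plan_to_goal(current, goal, W_t, W_a)
print("planned first action:", plan[0])
```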

Combining visual and tactile inputs could greatly improve the functionality of future robotic platforms and improve learning techniques. “To build machines that can learn by interacting with the world independently, we need robots that can leverage data from multiple senses,” the team concluded. We can only imagine what Facebook has in store for this; however, the company declined to comment on potential practical applications for the research.


Microsoft aims to train and certify 15,000 workers on AI skills by 2022

Microsoft is investing in certification and training for a range of AI-related skills in partnership with education provider General Assembly, the companies announced this morning. The goal is to train some 15,000 people by 2022 in order to increase the pool of AI talent around the world. The training will focus on AI, machine learning, data science, cloud and data engineering, and more.

In the new program’s first year, Microsoft will focus on training 2,000 workers to transition to an AI and machine learning role. And over the full three years, it will train an additional 13,000 workers with AI-related skills.

As part of this effort, Microsoft is joining General Assembly’s new AI Standards Board, along with other companies. Over the next six months, the Board will help to define AI skills standards, develop assessments, design a career framework and create credentials for AI skills.

The training developed will also focus on filling the AI jobs currently available where Microsoft technologies are involved. As Microsoft notes, many workers today are not skilled enough for roles involving the use of Azure in aerospace, manufacturing and elsewhere. The training, it says, will focus on serving the needs of its customers who are looking to employ AI talent.

This will also include the creation of an AI Talent Network that will source candidates for long-term employment as well as contract work. General Assembly will assist with this effort by connecting its 22 campuses and the broader Adecco ecosystem to this jobs pipeline. (GA sold to staffing firm Adecco last year for $413 million.)

Microsoft cited the potential for AI’s impact on job creation as a reason behind the program, noting that up to 133 million new roles may be created by 2022 as a result of the new technologies. Of course, it’s also very much about making sure its own software and cloud customers can find people who are capable of working with its products, like Azure.

“As a technology company committed to driving innovation, we have a responsibility to help workers access the AI training they need to ensure they thrive in the workplace of today and tomorrow,” said Jean-Philippe Courtois, executive vice president and president of Global Sales, Marketing and Operations at Microsoft, in a statement. “We are thrilled to combine our industry and technical expertise with General Assembly to help close the skills gap and ensure businesses can maximize their potential in our AI-driven economy.”


Health[at]Scale lands $16M Series A to bring machine learning to healthcare

Health[at]Scale, a startup with founders who have both medical and engineering expertise, wants to bring machine learning to bear on healthcare treatment options to produce outcomes with better results and less aftercare. Today the company announced a $16 million Series A. Optum, which is part of the UnitedHealth Group, was the sole investor.

Today, when people look at treatment options, they may consider a particular surgeon or hospital, or simply what the insurance company will cover, but they typically lack the data to make truly informed decisions. This is true across every part of the healthcare system, particularly in the U.S. The company believes that, using machine learning, it can produce better results.

“We are a machine learning shop, and we focus on what I would describe as precision delivery. So in other words, we look at this question of how do we match patients to the right treatments, by the right providers, at the right time,” Zeeshan Syed, Health[at]Scale’s CEO, told TechCrunch.

The founders see the current system as fundamentally flawed, and while they see their customers as insurance companies, hospital systems and self-insured employers, they say the tools they are putting into the system should help everyone in the loop get a better outcome.

The idea is to make treatment decisions more data driven. While they aren’t sharing their data sources, they say they have information ranging from patients with a given condition, to the doctors who treat that condition, to the facilities where the treatment happens. By looking at a patient’s individual treatment needs and medical history, they believe they can do a better job of matching that person to the best doctor and hospital for the job. They say this will result in the fewest post-operative treatment requirements, whether that involves trips to the emergency room or time in a skilled nursing facility, all of which would add significant additional cost.
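
As a purely hypothetical illustration of what such matching could look like, the sketch below trains an outcome model on synthetic patient-provider histories and then ranks candidate providers for a new patient by predicted complication risk. The features, data and logistic model are all assumptions; the company has not disclosed its data sources or methods.

```python
"""Hypothetical sketch of data-driven patient-provider matching: fit an
outcome model on historical (patient, provider) pairs, then rank candidate
providers for a new patient by predicted complication risk. Everything here
is synthetic and illustrative, not Health[at]Scale's approach."""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic history: 3 patient features + 3 provider features -> complication.
X = rng.normal(size=(1000, 6))
y = (X @ rng.normal(size=6) + rng.normal(size=1000) > 0.5).astype(int)
model = LogisticRegression().fit(X, y)

def rank_providers(patient, providers):
    """Return provider indices sorted from lowest to highest predicted
    complication probability for this patient."""
    rows = np.array([np.concatenate([patient, p]) for p in providers])
    risk = model.predict_proba(rows)[:, 1]
    return np.argsort(risk), risk

patient = rng.normal(size=3)            # new patient's features
providers = rng.normal(size=(5, 3))     # 5 candidate providers
order, risk = rank_providers(patient, providers)
print("best-match provider:", order[0], "predicted risk:", round(risk[order[0]], 3))
```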

If you’re thinking this is strictly about cost savings for these large institutions, Mohammed Saeed, who is the company’s chief medical officer and has an MD from Harvard and a PhD in electrical engineering from MIT, insists that isn’t the case. “From our perspective, it’s a win-win situation since we provide the best recommendations that have the patient interest at heart, but from a payer or provider perspective, when you have lower complication rates you have better outcomes and you lower your total cost of care long term,” he said.

The company says the solution is being used by large hospital systems and insurer customers, although it couldn’t share any names. The founders also said the company has studied outcomes after using its software and that its machine learning models have produced better results, although it couldn’t provide the data to back that up at this time.

The company was founded in 2015 and currently has 11 employees. It plans to use today’s funding to build out sales and marketing to bring the solution to a wider customer set.


LG developed its own AI chip to make its smart home products even smarter

As its once-strong mobile division continues to slide, LG is sharpening its focus on emerging tech. The company has pushed into automotive, particularly its self-driving capabilities, and today it doubled down on its smart home play with the announcement of its own artificial intelligence (AI) chip.

LG said the new chip includes its own neural engine that will improve the deep-learning algorithms used in its future smart home devices, which will include robot vacuum cleaners, washing machines, refrigerators and air conditioners. The chip can operate without an internet connection thanks to on-device processing, and it uses “a separate hardware-implemented security zone” to store personal data.

“The AI Chip incorporates visual intelligence to better recognize and distinguish space, location, objects and users while voice intelligence accurately recognizes voice and noise characteristics while product intelligence enhances the capabilities of the device by detecting physical and chemical changes in the environment,” the company wrote in an announcement.

To date, companies seeking AI or machine learning (ML) smarts at the chipset level have turned to established names like Intel, ARM and Nvidia, with upstarts including Graphcore, Cerebras and Wave Computing providing VC-fueled alternatives.

There is, indeed, a boom in AI and ML challengers. A New York Times report published last year estimated that “at least 45 startups are working on chips that can power tasks like speech and self-driving cars,” but that doesn’t include many under-the-radar projects financed by the Chinese government.

LG isn’t alone in opting to fly solo in AI. Facebook, Amazon and Apple are all reported to be working on AI and ML chipsets for specific purposes. In LG’s case, its solution is customized for smarter home devices.

“Our AI Chip is designed to provide optimized artificial intelligence solutions for future LG products. This will further enhance the three key pillars of our artificial intelligence strategy – evolve, connect and open – and provide customers with an improved experience for a better life,” IP Park, president and CTO of LG Electronics, said in a statement.

The company’s home appliance unit just recorded its highest quarter of sales and profit to date. Despite a sluggish mobile division, LG posted an annual profit of $2.4 billion last year with standout results for its home appliance and home entertainment units — two core areas of focus for AI.


Unveiling its latest cohort, Alchemist announces $4 million in funding for its enterprise accelerator

Alchemist, the enterprise software and services focused accelerator, has raised $4 million in fresh financing from investors BASF and the Qatar Development Bank, just in time for its latest demo day, at which it is unveiling 20 new companies.

Qatar and BASF join previous investors including the venture firms Mayfield, Khosla Ventures, Foundation Capital, DFJ, and USVP, and corporate investors like Cisco, Siemens and Juniper Networks.

While the roster of successes from Alchemist’s fund isn’t as lengthy as Y Combinator’s, the accelerator program has launched the likes of quantum computing upstart Rigetti, soft-launch developer tool LaunchDarkly and drone startup Matternet.

Some (personal) highlights of the latest cohort include:

  • Bayware: Helmed by a former head of software-defined networking at Cisco, the company is pitching a tool that makes creating networks in multi-cloud environments as easy as copying and pasting.
  • MotorCortex.AI: Co-founded by a Stanford Engineering professor and a Carnegie Mellon roboticist, the company is using computer vision, machine learning, and robotics to create a fruit packer for packaging lines. Starting with avocados, the company is aiming to tackle the entire packaging side of pick and pack in logistics.
  • Resilio: With claims of a 96% effectiveness rate and $35,000 in annual recurring revenue with another $1 million in the pipeline, Resilio is already seeing companies embrace its mobile app, which uses a phone’s camera to track stress levels and delivers in-app prompts on how to lower them, according to Alchemist.
  • Operant Networks: It’s a long-held belief (of mine) that if computing networks are already irrevocably compromised, the best thing companies and individuals can do is just encrypt the hell out of their data. Apparently Operant agrees with me. The company claims 50% time savings with this approach and has booked $1.9 million in 2019 as proof, according to Alchemist.
  • HPC Hub: HPC Hub wants to democratize access to supercomputers by overlaying a virtualization layer and pre-installed software on underutilized supercomputers, giving more companies and researchers easier access to these machines… and it has booked $92,000 worth of annual recurring revenue.
  • DinoPlusAI: This chip developer is designing a low-latency chip for artificial intelligence applications, reducing latency twelvefold over a competing Nvidia chip, according to the company. DinoPlusAI sees applications for its tech in things like real-time AI markets and autonomous driving. Its team is led by a designer from Cadence and Broadcom, and the company already has $8 million in letters of intent signed, according to Alchemist.
  • Aero Systems West: Co-founders from the Air Force’s Research Labs and MIT are aiming to take humans out of drone operations and maintenance. The company contends that for every hour of flight time, drones require seven hours of maintenance and checkups. Aero Systems aims to reduce that by using remote analytics, self-inspection, autonomous deployment and automated maintenance to take humans out of the drone business.

Watch a livestream of Alchemist’s demo day pitches, starting at 3PM, here.
