From space, powerful thunderstorms look like boiling water

Storm systems just starting to form over the Southern Plains on May 20, 2019.
Image: NOAA

The most potent thunderstorms roil and churn, like a pot of boiling water.

With the National Oceanic and Atmospheric Administration’s (NOAA) latest weather-imaging satellites, this aggressive storm behavior is easily visible from space. Such dynamic thunderstorm activity was on full display Monday, when conditions ripe for severe weather and tornadoes swirled over the Southern Plains. NOAA’s GOES-16 satellite captured the action from some 22,000 miles above Earth. 

“It looks like a big bomb going off,” said Jeff Weber, a meteorologist with the University Corporation for Atmospheric Research.

The roiling storms here are supercells, a type of violent thunderstorm that can spawn tornadoes. And indeed, many of these May 20 supercells did form twisting columns of air that swept the ground in the region, noted Weber.

The key element of this cloud-churning appearance is the updraft, a potent wind shooting up through a thunderstorm. “The ‘boiling appearance’ you are seeing is due to the strength of the updraft of the storm,” said Kristin Calhoun, a research scientist at NOAA’s National Severe Storms Laboratory.

“It looks like a big bomb going off.”

The very nature of thunderstorms is to rapidly transport heat and moisture up from the ground and into the sky. “It rises six to eight miles in the atmosphere in a pretty short amount of time,” noted Brian Tang, an atmospheric scientist at the University at Albany. These rising winds travel at 30 to 50 mph, but have hit speeds of up to 100 mph, Tang said.

Eventually the warm air and water-rich clouds reach the top of the thunderstorm, where the air “billows out,” explained Weber. Gravity then pulls the clouds back down, creating the roiling effect. 

“That’s indicative of a very powerful storm,” said Weber. 

In places prone to severe weather, like the U.S. plains, a calm cloud can rapidly transform into a fuming supercell thunderstorm. That’s why, when viewed from space, these storms sometimes appear to burst out of the atmosphere. “On these really violent days we can see a cloud go from a normal cloud to a severe thunderstorm in a matter of 20 minutes,” said Stephen Strader, a severe weather expert at Villanova University who chases these storms through the U.S. plains. “Within 30 minutes [the storm] could have a tornado warning.” 

When we view NOAA’s satellite imagery, though, we’re seeing a sped-up version of what’s transpired on Earth. It’s a time-lapse of detailed satellite photography. But the boiling motion is the same. “It’s moving,” said Tang. “Just like a pot of water on a stove.”

This boiling atmospheric behavior is now clearly visible because NOAA’s newest weather-imaging satellites, GOES-16 and GOES-17, can take highly detailed images every 30 seconds. GOES-16, which captured the roiling storms above the Texas Panhandle, is situated over the equator and can see the entire U.S. 
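Those loop-style animations are assembled by stitching the individual frames together in sequence. As a minimal sketch of the idea (the folder of frames and the playback rate here are hypothetical, not part of NOAA's actual pipeline), a set of downloaded GOES-16 snapshots could be turned into a time-lapse like this:

```python
# Minimal sketch: turn downloaded GOES-16 image frames into a time-lapse GIF.
# The directory, file pattern, and frame rate are illustrative placeholders.
import glob
import imageio.v2 as imageio

frame_paths = sorted(glob.glob("goes16_frames/*.png"))  # one snapshot every 30 seconds
frames = [imageio.imread(path) for path in frame_paths]

# At 10 animation frames per second, five minutes of real time
# compresses into one second of playback.
imageio.mimsave("supercell_timelapse.gif", frames, duration=0.1)
```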

A Colorado supercell on May 19, 2019.

Image: Kristin Calhoun / NOAA

On May 20, a number of powerful supercell thunderstorms formed because the right ingredients were available and then mixed together. There was an abundance of moisture, colliding masses of warm and cool air, and amplified atmospheric instability as air within the developing storms twisted and changed direction while rising even higher.

Severe weather pummeled the region: infrastructure was mangled, trailer homes were demolished, and people were hurt. But there weren’t as many supercell storms as forecasters projected, explained Strader. “The models indicated that this would be a historic event,” said Strader. “That’s what didn’t unfold. Society got luckier than we thought was possible.”

That’s because in Oklahoma a cap of warm air suppressed one of the primary storm ingredients, instability, explained Strader. This cap, which formed over Mexico, sat over the thunderstorms, keeping a lid on some of the storm activity. 

But many roiling storms still formed. And some 20 twisters were spotted in the greater region.

“It certainly was not a busted forecast,” said Weber.


Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself

Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically the Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, but keep costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved, but by sampling the forces on the legs 8,000 times per second and responding just as quickly, the motors can act like virtual springs.
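To make that idea concrete, here is a minimal sketch of a virtual-spring control loop for a single leg, assuming a hypothetical motor interface (`read_leg_deflection`, `set_motor_torque`) and made-up gains rather than Doggo's actual firmware:

```python
# Sketch of a virtual-spring (impedance) control loop for one leg.
# Gains, the loop rate, and the hardware calls are illustrative placeholders;
# a real 8 kHz loop would live on a microcontroller, not in Python.
import time

STIFFNESS = 120.0   # N*m per rad: how hard the virtual spring pushes back
DAMPING = 1.5       # N*m per (rad/s): keeps the leg from oscillating
LOOP_HZ = 8000      # sample leg forces ~8,000 times per second

def read_leg_deflection():
    """Placeholder: return (angle_error, angular_velocity) from the motor encoders."""
    return 0.0, 0.0

def set_motor_torque(torque):
    """Placeholder: command the leg motor."""
    pass

def virtual_spring_loop():
    period = 1.0 / LOOP_HZ
    while True:
        angle_error, angular_velocity = read_leg_deflection()
        # Spring-damper law: torque opposes deflection and motion, so the
        # rigid leg behaves as if it had a physical spring and shock absorber.
        torque = -STIFFNESS * angle_error - DAMPING * angular_velocity
        set_motor_torque(torque)
        time.sleep(period)
```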

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime, the Extreme Mobility team will be improving on the capabilities of Doggo in collaboration with the university’s Robotic Exploration Lab, and also working on a similar robot twice the size, called Woofer.


Why is Facebook doing robotics research?

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria (it’s why we have an event covering both), and advances in one often drive advances, or open new areas of inquiry, in the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen interesting papers along these lines.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
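A rough sketch of what "rewarded for moving forward, but no real clue how to work its legs" can look like in practice is shown below; the state fields and penalty weights are assumptions for illustration, not Facebook's actual training code:

```python
# Illustrative reward function for learning to walk "from scratch".
# The state fields and penalty weights are hypothetical placeholders.

def walking_reward(previous_state, current_state, action):
    # Primary signal: how far the hexapod's body moved forward this step.
    forward_progress = current_state["x_position"] - previous_state["x_position"]

    # Small penalties discourage wasteful flailing and falling over,
    # but say nothing about *how* to move the legs.
    energy_penalty = 0.001 * sum(a * a for a in action)
    fall_penalty = 1.0 if current_state["body_height"] < 0.05 else 0.0

    return forward_progress - energy_penalty - fall_penalty
```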

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
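One common way to express that kind of "curiosity" (a sketch of the general idea, not Facebook's specific method) is to score each candidate action by its expected task value plus a bonus for how much it is expected to reduce the model's uncertainty:

```python
# Sketch of curiosity as an uncertainty-reduction bonus when choosing actions.
# `task_value` and `predicted_uncertainty_drop` are hypothetical model queries.
CURIOSITY_WEIGHT = 0.3  # how strongly uncertainty reduction is rewarded

def choose_action(candidate_actions, model):
    def score(action):
        # Expected progress on the actual task (e.g. a successful grasp)...
        value = model.task_value(action)
        # ...plus a bonus for how much this action is expected to reduce
        # the model's uncertainty about the scene (e.g. a better viewpoint).
        bonus = CURIOSITY_WEIGHT * model.predicted_uncertainty_drop(action)
        return value + bonus

    return max(candidate_actions, key=score)
```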

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot, left these tasks to be accomplished “just in time,” they would produce CPU usage spikes, visible latency in the image, and all kinds of stuff the user or system engineer doesn’t want. But if it’s running them all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
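In other words, the tactile readings can be reshaped into the same kind of 2D array a vision model already consumes. A small sketch, assuming a hypothetical 64x64 taxel sensor and a policy trained on images:

```python
# Sketch: feeding a tactile pressure map to a model built for images.
# The 64x64 sensor resolution and `vision_policy` are illustrative assumptions.
import numpy as np

def tactile_frame_to_image(pressure_readings):
    """Arrange raw taxel pressures into a 2D 'image' and normalize to [0, 1]."""
    frame = np.asarray(pressure_readings, dtype=np.float32).reshape(64, 64)
    frame -= frame.min()
    peak = frame.max()
    return frame / peak if peak > 0 else frame

# The same policy that was trained to act on video frames could then be
# queried with tactile "frames" instead:
# action = vision_policy(tactile_frame_to_image(sensor.read()))
```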

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.
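One toy way to picture that kind of transfer (not Facebook's architecture, just an illustration) is a pair of small encoders that project different modalities into a single shared representation, so a downstream decision head works regardless of which sense fed it:

```python
# Toy sketch of cross-modal transfer via a shared representation.
# Layer sizes and modalities are illustrative assumptions.
import torch
import torch.nn as nn

SHARED_DIM = 128  # both modalities are projected into this common space

audio_encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, SHARED_DIM))
touch_encoder = nn.Sequential(nn.Linear(4096, 256), nn.ReLU(), nn.Linear(256, SHARED_DIM))

# A single downstream head (e.g. "how hard to grip") consumes the shared
# representation, regardless of which sense produced it.
grip_head = nn.Linear(SHARED_DIM, 1)

audio_features = torch.randn(1, 512)   # stand-in for an audio embedding
touch_features = torch.randn(1, 4096)  # stand-in for a flattened pressure map

grip_from_audio = grip_head(audio_encoder(audio_features))
grip_from_touch = grip_head(touch_encoder(touch_features))
```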

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.


This clever transforming robot flies and rolls on its rotating arms

There’s great potential in using both drones and ground-based robots for situations like disaster response, but generally these platforms either fly or creep along the ground. Not the “Flying STAR,” which does both quite well, and through a mechanism so clever and simple you’ll wish you’d thought of it.

Conceived by researchers at Ben-Gurion University in Israel, the “flying sprawl-tuned autonomous robot” is based on the elementary observation that both rotors and wheels spin. So why shouldn’t a vehicle have both?

Well, there are lots of good reasons why it’s difficult to create such a hybrid, but the team, led by David Zarrouk, overcame them with the help of today’s high-powered, lightweight drone components. The result is a robot that can easily fly when it needs to, then land softly and, by tilting the rotor arms downwards, direct that same motive force into four wheels.

Of course you could have a drone that simply has a couple of wheels on the bottom that let it roll along. But this improves on that idea in several ways. In the first place, it’s mechanically more efficient because the same motor drives the rotors and wheels at the same time — though when rolling, the RPMs are of course considerably lower. But the rotating arms also give the robot a flexible stance, large wheelbase and high clearance that make it much more capable on rough terrain.
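Conceptually, the control logic boils down to two modes sharing the same four motors. The sketch below is only an illustration; the tilt angles, RPM numbers, and hardware interface are invented, not the actual FSTAR controller:

```python
# Sketch of the fly/drive mode switch: the same motors spin the rotors in
# flight and drive the wheels on the ground. Angles, RPMs, and the API are
# hypothetical placeholders.

ARM_ANGLE_FLY = 0      # arms level: rotors generate thrust
ARM_ANGLE_DRIVE = -55  # arms tilted down: rotor torque turns the wheels

class SprawlRobot:
    def __init__(self, arms, motors):
        self.arms = arms      # placeholder arm-tilt servos
        self.motors = motors  # placeholder rotor/wheel motors

    def enter_flight_mode(self):
        self.arms.set_angle(ARM_ANGLE_FLY)
        self.motors.set_rpm(9000)   # high RPM for lift

    def enter_drive_mode(self):
        self.arms.set_angle(ARM_ANGLE_DRIVE)
        self.motors.set_rpm(600)    # far lower RPM to roll along the ground
```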

You can watch FSTAR fly, roll, transform, flatten and so on in the following video, prepared for presentation at the IEEE International Conference on Robotics and Automation in Montreal:

[embedded content]

The ability to roll along at up to 8 feet per second using comparatively little energy, while also being able to leap over obstacles, scale stairs or simply ascend and fly to a new location, gives FSTAR considerable adaptability.

“We plan to develop larger and smaller versions to expand this family of sprawling robots for different applications, as well as algorithms that will help exploit speed and cost of transport for these flying/driving robots,” said Zarrouk in a press release.

Obviously at present this is a mere prototype, and will need further work to bring it to a state where it could be useful for rescue teams, commercial operations and the military.


NASA's new flying robot gets its first hardware check in space

An Astrobee robot, designed to help astronauts in space.
Image: NASA

NASA’s new robot is getting ready for work.

Astrobee, a free-flying robot system that’s designed to give astronauts a hand in space, has had its first hardware checks on the International Space Station.

The system is actually a trio of robots, named Honey, Queen, and Bumble, which are propelled by electric fans and can return to their docking station to recharge their batteries. Two of the robots, Bumble and Honey, were launched to the space station on Apr. 17. 

NASA posted a photo of astronaut Anne McClain, who performed the first series of tests on Astrobee, which included checking the robot’s avionics, cameras, propulsion, and docking for power and data transfer.

Anne McClain checks Astrobee.

Image: NASA

Astrobee is a test to see how robots can take care of spacecraft when astronauts are away, which NASA explained will be crucial for deep-space missions, such as its plan to return to the moon.

The robots feature cameras, microphones, and other sensors to help operators on the ground monitor conditions. 

[embedded content]

They can fly independently or be controlled remotely, allowing astronauts to concentrate on more important tasks. The robots are modular too, which means more features can be added when needed.

It’ll be a little while until the system gets to work, with more tests to run before its project commissioning date sometime around October or November. 

Researchers are also planning more complex experiments, including carrying payloads, which will begin in 2020.
