Parents may be able to spot ear infections with a paper cone and an app

Dennis Wise/University of Washington

Researchers are working on a smartphone app that could help diagnose ear infections. As NPR reports, the app uses the phone’s microphone, its speaker and a small paper cone. In its current form, the app sends short sound pulses through the funnel and into the ear canal. It then measures the echo of that sound, and an algorithm uses the reading to predict whether there’s fluid behind the eardrum, a common sign of infection.
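The basic idea, emit a pulse and then measure how much of it bounces back, can be illustrated with a toy sketch in Python. Everything below is invented for illustration: the chirp parameters, the energy-ratio test and the 0.5 threshold are all assumptions, and the real app uses a trained algorithm on the echo waveform rather than a fixed cutoff.

```python
import math

def make_chirp(f0=1800.0, f1=4400.0, duration=0.15, rate=44100):
    """Generate a frequency-sweep test pulse (parameter values are guesses)."""
    n = int(duration * rate)
    return [math.sin(2 * math.pi * (f0 + (f1 - f0) * i / n) * i / rate)
            for i in range(n)]

def echo_energy_ratio(sent, received):
    """Ratio of echo energy to outgoing-pulse energy."""
    e_sent = sum(s * s for s in sent)
    e_recv = sum(r * r for r in received)
    return e_recv / e_sent

def likely_fluid(sent, received, threshold=0.5):
    """Fluid behind the eardrum stiffens it, so it reflects more sound.
    A strong echo (high energy ratio) therefore suggests fluid.
    The threshold here is arbitrary."""
    return echo_energy_ratio(sent, received) > threshold

# Simulated echoes: a healthy eardrum absorbs most of the pulse,
# while a fluid-backed one reflects most of it.
pulse = make_chirp()
healthy_echo = [0.2 * s for s in pulse]   # weak reflection
fluid_echo = [0.9 * s for s in pulse]     # strong reflection

print(likely_fluid(pulse, healthy_echo))  # False
print(likely_fluid(pulse, fluid_echo))    # True
```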

The team of researchers — from the University of Washington and the Seattle Children’s Research Institute — released their initial findings in Science Translational Medicine today. In their study, about 50 children had their ears checked with the app, and the tool was correct about 85 percent of the time, which is comparable to technology used in clinical settings. But as NPR reports, the app is still in development, and it will need FDA approval before it hits the market.

The researchers hope this might help parents diagnose ear infections, but specialists point out that not all fluid behind the eardrum indicates an infection. Not long ago, the Apple Watch heart monitor, which can warn of irregular heart rhythms, faced similar concerns. Some initially feared that its results could be false positives, but a recent study by Stanford University suggests otherwise. Of course, health-based apps have become increasingly popular, and the FDA has approved products like a personal ECG device, an app-connected inhaler and a contraceptive app, all of which might help pave the way for this product.


Bioengineers 3D print complex vascular networks

Jordan Miller/Rice University

Bioengineers are one step closer to 3D printing organs and tissues. A team led by Rice University and the University of Washington has developed a tool to 3D print complex and “exquisitely entangled” vascular networks. These mimic the body’s natural passageways for blood, air, lymph and other fluids, and they will be essential for artificial organs.

For decades, one of the challenges in replicating human tissues has been figuring out how to get nutrients and oxygen into the tissue and how to remove waste. Our bodies use vascular networks to do this, but it’s been hard to recreate them in soft, artificial materials.

This new tool overcomes those challenges by printing thin layers of a liquid, pre-hydrogel solution, which becomes solid when it’s exposed to blue light. This allowed the scientists to create biocompatible gels with intricate internal architecture similar to the human body’s vascular networks.

The researchers relied on other open-source projects to create their tool — called the stereolithography apparatus for tissue engineering, or SLATE. And as a way of giving back, they’ve made SLATE open source, as well. Their findings were published in Science this week, and all of their experiment data is free and open to the public. While the researchers say we’re just beginning to understand the complex form and function of the body’s structures, they hope this will help make 3D-printed organs a viable option sooner rather than later.


Microsoft device stores digital info as DNA


Microsoft

Microsoft is on its way to replacing data centers with DNA. The company and researchers from the University of Washington have successfully automated the process of translating digital information into DNA and back to bits. They now have the first fully automated, end-to-end DNA storage device. And while there’s room for improvement, Microsoft hopes this proof of concept will advance DNA storage technology.

In its first run, the $10,000 prototype converted “HELLO” into DNA. The device first encoded the bits (1’s and 0’s) into DNA sequences (A’s, C’s, T’s, G’s). It then synthesized the DNA and stored it as a liquid. Next, the stored DNA was read by a DNA sequencer. Finally, the decoding software translated the sequences back into bits. The 5-byte message took 21 hours to convert back and forth, but the researchers have already identified a way to reduce the time required by 10 to 12 hours. They’ve also suggested ways to reduce the cost by several thousand dollars.
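The first and last steps of that pipeline, translating bits into DNA sequences and back, can be sketched with the simplest possible mapping of two bits per nucleotide. This is an illustration only: Microsoft’s actual codec is not public in this form, and a real system also adds error correction and avoids sequences that are hard to synthesize or read, such as long runs of a single base.

```python
# Toy bit-to-base codec: two bits per nucleotide.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(message: str) -> str:
    """Turn an ASCII message into a DNA sequence, 8 bits -> 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Reverse the mapping: read bases back into bits, then into bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

strand = encode("HELLO")
print(strand)          # CAGACACCCATACATACATT
print(decode(strand))  # HELLO
```

Under this mapping, the 5-byte message becomes a 20-base sequence; the hard, slow parts of the real device are the chemistry in between, synthesizing that strand and sequencing it back.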

In nucleotide form, HELLO (01001000 01000101 01001100 01001100 01001111 in bits) yielded approximately 1 mg of DNA, and just 4 micrograms were retained for sequencing. As Technology Review notes, at that rate, all of the information stored in a warehouse-sized data center could fit into a few standard-size dice. Once the technique is perfected, it could store data much longer than we’re currently able to. As Microsoft points out, some DNA has held up for tens of thousands of years in mammoth tusks and the bones of early humans. That’s why Microsoft and other tech companies are eyeing DNA as a way to solve looming data storage problems. As previously reported, Microsoft’s formal goal is to have an operational DNA-based storage system working inside a data center by the end of this decade.

DNA storage isn’t entirely new, but the novelty here is that this system is fully automated. Before it can succeed commercially, though, the cost of synthesizing DNA and extracting the information it stores needs to come down. In other words, we need a way to synthesize and sequence DNA cost-efficiently. While it may sound a bit sci-fi, we could all be storing data as DNA before we know it.


Can you tell a real face from a computer-generated one?


The test is simple enough: Guess which human face is real and which is not. You’d be surprised how tricky it can get. Read more…

More about Mashable Video, Artificial Intelligence, Nvidia, University Of Washington, and Gan


AWS launches Neo-AI, an open-source tool for tuning ML models

AWS isn’t exactly known as an open-source powerhouse, but maybe change is in the air. Amazon’s cloud computing unit today announced the launch of Neo-AI, a new open-source project under the Apache Software License. The new tool takes some of the technologies that the company developed and used for its SageMaker Neo machine learning service and brings them (back) to the open source ecosystem.

The main goal here is to make it easier to optimize models for deployments on multiple platforms — and in the AWS context, that’s mostly machines that will run these models at the edge.

“Ordinarily, optimizing a machine learning model for multiple hardware platforms is difficult because developers need to tune models manually for each platform’s hardware and software configuration,” AWS’s Sukwon Kim and Vin Sharma write in today’s announcement. “This is especially challenging for edge devices, which tend to be constrained in compute power and storage.”

Neo-AI can take TensorFlow, MXNet, PyTorch, ONNX, and XGBoost models and optimize them. AWS says Neo-AI can often double a model’s speed, all without any loss of accuracy. As for hardware, the tool supports Intel, Nvidia, and ARM chips, with support for Xilinx, Cadence, and Qualcomm coming soon. All of these companies, except for Nvidia, will also contribute to the project.

“To derive value from AI, we must ensure that deep learning models can be deployed just as easily in the data center and in the cloud as on devices at the edge,” said Naveen Rao, General Manager of the Artificial Intelligence Products Group at Intel. “Intel is pleased to expand the initiative that it started with nGraph by contributing those efforts to Neo-AI. Using Neo, device makers and system vendors can get better performance for models developed in almost any framework on platforms based on all Intel compute platforms.”

In addition to optimizing the models, the tool also converts them into a new format to prevent compatibility issues, and a local runtime on the target devices then handles execution.

AWS notes that some of the work on the Neo-AI compiler started at the University of Washington (specifically the TVM and Treelite projects). “Today’s release of AWS code back to open source through the Neo-AI project allows any developer to innovate on the production-grade Neo compiler and runtime.” AWS has something of a reputation for taking open-source projects and using them in its cloud services. It’s good to see the company starting to contribute back a bit more now.

In the context of Amazon’s open source efforts, it’s also worth noting that the company’s Firecracker hypervisor now supports the OpenStack Foundation’s Kata Containers project. Firecracker itself is open source, too, and I wouldn’t be surprised if Firecracker ended up as the first open source project that AWS brings under the umbrella of the OpenStack Foundation.
