CapitalG co-founder introduces $175M early-stage venture fund

Valo Ventures, a new firm focused on social, economic and environmental megatrends, has closed on $175 million for its debut venture capital fund.

The effort is led by Scott Tierney, a co-founder of Alphabet’s growth investing unit CapitalG, along with Mona ElNaggar, a former managing director of TIFF Investment Management, and Julia Brady, who previously worked as a director at The Via Agency, a communications shop.

“Google is like being a kid in a candy store,” Tierney tells TechCrunch. “It’s a great place to be. For me, I thought, ‘alright, I’ve been here for seven years, I have this opportunity to create my own fund and be more entrepreneurial and take all the learnings I was fortunate to have inside of Google and apply them.’”

Tierney joined Google in 2011 as a director of corporate development after five years as a managing director at Steelpoint Capital Partners. In 2013, he co-founded CapitalG, where he served as a partner for the next two years. He completed his Google stint as a director of corporate development and strategic partnerships at Nest Labs, a title he held until mid-2018.

The Valo Ventures partners plan to participate in Series A, B and C deals for startups located in North America and Europe. Specifically, Valo is looking for businesses solving problems within climate change, urbanization, autonomy and mobility. 

The goal is to bring an ESG (environmental, social and corporate governance) perspective to venture capital, where investors infrequently take a mission-driven approach to deal-making. To date, Valo Ventures has deployed capital to Landit, a career pathing platform for women, and a stealth startup developing an AI platform for electricity demand and supply forecasting.

Is your product’s AI annoying people?

Artificial intelligence is allowing us all to consider surprising new ways to simplify the lives of our customers. As a product developer, your central focus is always on the customer. But new problems can arise when the specific solution under development helps one customer while alienating others.

We tend to think of AI as an incredible dream assistant for our lives and business operations, but that’s not always the case. Designers of new AI services should consider in what ways, and for whom, these services might be annoying, burdensome or problematic, and whether those affected are the direct customer or others intertwined with that customer. When an AI service makes a task easier for our customer but ends up making things more difficult for others, that outcome can ultimately cause real harm to our brand perception.

Let’s consider one personal example taken from my own use of Amy.ai, a service (from x.ai) whose AI assistants, Amy and Andrew Ingram, help schedule meetings for up to four people. This service solves the very relatable problem of scheduling meetings over email, at least for the person who is trying to do the scheduling.

After all, who doesn’t want a personal assistant to whom you can simply say, “Amy, please find a time next week to meet with Tom, Mary, Anushya and Shiveesh”? That way, you don’t have to arrange a meeting room, send the email, and go back and forth managing everyone’s replies. My own experience showed that while Amy made it easier for me to find a good time to meet with my four colleagues, the service soon became a headache for those four people. They resented me for it after being bombarded by countless emails trying to find a mutually agreeable time and place for everyone involved.

Automotive designers are another group incorporating all kinds of new AI systems to enhance the driving experience. For instance, Tesla recently updated its autopilot software to allow a car to change lanes automatically when it sees fit, presumably when the system determines that the next lane’s traffic is moving faster.

In concept, this idea seems advantageous to the driver, who can merge safely into faster traffic while being relieved of the cognitive burden of changing lanes manually. Furthermore, letting the Tesla system change lanes takes away the urge to play Speed Racer, or the competitive edge one may feel on the highway.

However, drivers in other lanes who are forced to react to the Tesla autopilot may be annoyed if the Tesla jerks, slows down or behaves outside the normal realm of what people expect on the freeway. Moreover, if another driver is moving very fast and the autopilot fails to register that speed when it decides to change lanes, that driver is likely to be irritated. We can all relate to driving 75 mph in the fast lane, only to have someone suddenly pull in front of us at 70 as if clueless that the lane was moving at 75.

For two-lane highways that are not busy, the Tesla software might work reasonably well. However, in my experience driving the congested freeways of the Bay Area, the system performed horribly whenever it handled crowded lane changes, and I knew it was angering other drivers most of the time. Even without knowing those irate drivers personally, I care enough about driving etiquette to change lanes politely rather than get the finger for doing so.

[Image: Post Intelligence robot]

Another example from the internet world involves Google Duplex, a clever feature for Android phone users that allows an AI to make restaurant reservations. From the consumer point of view, having an automated system make a dinner reservation on one’s behalf sounds excellent. It is advantageous to the person making the reservation because, theoretically, it spares them the burden of calling when the restaurant is open and the hassle of dealing with busy signals and callbacks.

However, this tool is also potentially problematic for the restaurant worker who answers the phone. Even though the system may introduce itself as artificial, the burden shifts to the restaurant employee to adapt and master a new and more limited interaction to achieve the same goal — making a simple reservation.

On the one hand, Duplex is bringing customers to the restaurant, but on the other hand, the system is narrowing the scope of interaction between the restaurant and its customer. The restaurant may have other tables on different days, or it may be able to squeeze you in if you leave early, but the system might not handle exceptions like this. Even the idea of an AI bot bothering the host who answers the phone doesn’t seem quite right.

As you think about making the lives of your customers easier, consider how the assistance you are dreaming up might be more of a nightmare for everyone else associated with your primary customer. If there is any question about the negative experience of anyone connected to your AI product, explore that experience further to determine whether there is a better way to delight your customer without angering their neighbors.

From a user-experience perspective, developing a customer journey map can be a helpful way to explore the actions, thoughts and emotional experiences of your primary customer or “buyer persona.” Identify the touchpoints at which your system interacts with innocent bystanders who are not your direct customers. For those people unaware of your product, explore their interactions with your buyer persona, and specifically their emotional experience.

An aspirational goal should be to delight this adjacent group of people enough that they move toward being prospects and, eventually, become your customers as well. You can also use participant ethnography to analyze the innocent bystander in relation to your product. This research method combines direct participation with observation of people as they interact with processes and the product.

A guiding design inspiration for this research could be, “How can our AI system behave in such a way that everyone who might come into contact with our product is enchanted and wants to know more?”

That’s just human intelligence, and it’s not artificial.

AI could be the key to catching Type 1 diabetes much earlier

Will AI lead to a quicker diagnosis of diabetes, a condition often called the silent killer? IBM researchers are hoping so. They recently announced an AI-powered screening tool that could potentially identify Type 1 diabetes antibodies in people’s blood.

For the millions of people living with Type 1 diabetes globally, everyday reality involves significant self-monitoring. In Type 1 diabetes, the pancreas fails to produce enough insulin, the hormone that moves energy-providing blood sugar into the body’s cells. From daily insulin injections to keeping blood glucose levels in check with nutrition and exercise plans, it’s a condition that requires patients to stay highly vigilant about their health.

About 1.25 million people have Type 1 diabetes in the United States alone, with an estimated 40,000 new diagnoses each year, according to the American Diabetes Association. Given this, you might be surprised that no standardized screening process exists to catch the condition early. Doctors generally test based on family history and other known risk factors, meaning Type 1 diabetes can fly under the radar. This can lead to sudden trips to the ER and surprise diagnoses, making the development of better screening tests a major life-saving priority for doctors.

[Image: A doctor or nutritionist explaining the glycemic index to a diabetic patient]

Now, here’s where AI comes in. At the American Diabetes Association’s 79th Scientific Sessions in early June, IBM and JDRF (formerly known as the Juvenile Diabetes Research Foundation), a nonprofit that spearheads Type 1 diabetes research, unveiled a predictive AI tool that maps the presence of Type 1 diabetes antibodies in blood to figure out when and how the condition could develop. Jianying Hu, IBM Fellow and global science leader of AI for health care at IBM Research, told Engadget that the AI was fed data from more than 22,000 people in the United States, Sweden and Finland.

The program pinpointed similarities among people with specific antibodies for the disease and the timeline of their Type 1 diabetes progression.

“One of the biggest potentials of this kind of work in building machine learning models for Type 1 diabetes is to be able to better identify who to monitor and how often to monitor them,” said Hu, whose team worked on this project with JDRF for more than a year. “Right now even the little we know, these antibodies are pervasive in the progression of Type 1 diabetes, but nobody knows who is more susceptible in developing them and when.”

She said these AI models could give doctors “a more personalized deadline” for how to monitor people and how often they should be tested.

In the past, Type 1 diabetes was called juvenile diabetes because it’s generally diagnosed in kids, teens and young adults, according to the Centers for Disease Control and Prevention (CDC). However, it can affect people at any age, said Utpal Pajvani, an endocrinologist and assistant professor at Columbia University Medical Center.

Pajvani, who is not affiliated with the project, explained that general practice dictates screening only people who are “high risk,” meaning they have a first-degree family member who has been diagnosed. Given how uncommon the condition is, he said, it isn’t something that warrants screening the population at large.

He cautioned that wide-ranging screening methods like this could lead to a lot of false positives.

“If you’re screening for a relatively uncommon condition, you’re going to end up with a lot of false positives. If you test all people, [including] those who are [at a] relatively low risk of developing it who have an absence of family history or other autoimmune disease, you will have a much higher rate of identifying people who might have a positive test for an antibody but are [at] low risk of having the disease,” Pajvani told Engadget.

[Image: A blood sample for a diabetes screening test]

Applying a broad screening test for a rare disease also risks making people unnecessarily anxious about a condition they are unlikely to develop. Essentially, no perfect screening test exists that would be free of false positives, he added.
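To see why, consider the base rates. Here is a back-of-the-envelope Bayes calculation; the sensitivity, specificity and prevalence figures are assumptions chosen for illustration, not numbers from IBM, JDRF or Pajvani:

```python
# Illustrative base-rate arithmetic: why screening a whole population for a
# rare condition yields mostly false positives. All numbers are assumed.

prevalence = 0.004    # assume ~0.4% of people have the condition
sensitivity = 0.90    # assumed: P(positive test | has condition)
specificity = 0.95    # assumed: P(negative test | no condition)

# Probability that a randomly screened person tests positive at all.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Positive predictive value: P(has condition | positive test).
ppv = sensitivity * prevalence / p_positive

print(f"Share of positive screens that are true positives: {ppv:.1%}")
# ~6.7% -- under these assumptions, roughly 14 of every 15 positive
# results in a general-population screen would be false alarms.
```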

Despite these critiques, Pajvani sees a future for this kind of technology. People who are at high risk for diabetes and test positive for these antibodies still don’t know on what timeline the disease will progress. This kind of AI tool could give doctors a necessary road map for charting the course of the condition, he explained.

Moving forward, Hu said that her team will soon be adding more data from Germany to be sorted by the AI. She added that another big piece of the project is working closely with physicians to see how they can apply this and how the insights gleaned from the AI can be used in clinical studies.

“I do love that people are thinking about this, and I love the idea of carrying forward the important clinical question of who will develop a condition, especially one as significant as Type 1 diabetes,” Pajvani added. “Hopefully, [I’d] like to see us in a place where technology can provide a greater understanding of how we help people.”

Pajvani said that as a clinician, he has yet to see AI move out of the theoretical and into the practical day to day of treating patients. For her part, Hu said that she thinks the presence of machine learning in medicine will only continue to “accelerate” and that it’s “extremely important” work that could lead to indispensable tools for doctors.

This AI doesn’t provide a definitive screening method today, but it offers a path for how machine learning tools could be used for faster, life-saving Type 1 diabetes diagnoses in the future.

Images: Jovanmandic via Getty Images (Doctor and patient); Gam1983 via Getty Images (Blood sugar test)

Intel is doing the hard work necessary to make sure robots can operate your microwave

Training computers and robots to understand and recognize objects (an oven, for instance, as distinct from a dishwasher) is crucial to getting them to a point where they can manage the relatively simple tasks that humans do every day. But even once you have an artificial intelligence trained to the point where it can tell your fridge from your furnace, you also need to make sure it can operate those things if you want it to be truly functional.

That’s where new work from Intel AI researchers, in collaboration with UCSD and Stanford, comes in. In a paper presented at the Conference on Computer Vision and Pattern Recognition, the research team details how it created ‘PartNet,’ a large dataset of 3D objects with highly detailed, hierarchically organized and fully annotated part information for each object.

The dataset is unique, and already in high demand among robotics companies, because it organizes objects into their segmented parts in a way that has terrific applications for building learning models designed to recognize and manipulate those objects in the real world. If, for instance, you’re hoping to have a robot arm turn on a microwave to reheat some leftovers, the robot needs to know about ‘buttons’ and their relation to the object as a whole.

Robots trained using PartNet, and evolutions of this dataset, won’t be limited to operating a computer-generated microwave that looks like someone found it on a curb with a ‘free’ sign taped to the front. The dataset includes over 570,000 parts across more than 26,000 individual objects, and parts that are common to objects across categories are marked as corresponding to one another, so an AI trained to recognize a chair back on one variety should be able to recognize it on another.

That’s handy if you want to redecorate your dining room, but still want your home helper bot to be able to pull out your new chairs for guests, just like it did with the old ones.
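To make the idea of hierarchically organized, cross-category part labels concrete, here is a minimal sketch of how such annotations might be represented and queried. This is an illustrative structure with assumed labels, not PartNet’s actual file format or API:

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """One node in a hierarchical part annotation for a 3D object."""
    label: str                                   # semantic label, e.g. "button"
    children: list = field(default_factory=list)

    def find(self, label: str):
        """Yield every part in this subtree carrying the given label."""
        if self.label == label:
            yield self
        for child in self.children:
            yield from child.find(label)

# A toy microwave annotated down to its interactive parts (labels assumed).
microwave = Part("microwave", [
    Part("door", [Part("handle")]),
    Part("body"),
    Part("control_panel", [Part("button"), Part("button"), Part("display")]),
])

# A robot planner can ask where the "buttons" are on any annotated object;
# because the same labels recur across object categories, a model trained on
# one microwave's buttons has a hook for recognizing another's.
print(len(list(microwave.find("button"))))  # -> 2
```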

Admittedly, my examples are all drawn from a far-flung, as-yet hypothetical future. There are plenty of near-term applications of detailed object recognition that are more useful, and part identification can likely help reinforce decision-making about general object recognition, too. But the implications for in-home robotics are definitely the more interesting to ponder, and they’re a focus of many of today’s efforts to commercialize advanced robotics.

Habana Labs launches its Gaudi AI training processor

Habana Labs, a Tel Aviv-based AI processor startup, today announced its Gaudi AI training processor, which promises to easily beat GPU-based systems by a factor of four. While the individual Gaudi chips beat GPUs in raw performance, it’s the company’s networking technology that gives it the extra boost to reach its full potential.

Gaudi will be available as a standard PCIe card that supports eight ports of 100Gb Ethernet, as well as a mezzanine card that is compliant with the relatively new Open Compute Project accelerator module specs. That card supports either ten ports of 100Gb Ethernet or 20 ports of 50Gb Ethernet. The company is also launching a system with eight of these mezzanine cards.

Habana Labs launched its Goya inferencing solution last year. With Gaudi, it now offers a complete solution for businesses that want to use its hardware instead of GPU-based systems built on chips from the likes of Nvidia. Thanks to its specialized hardware, Gaudi easily beats an Nvidia T4 accelerator on most standard benchmarks, all while using less power.

“The CPU and GPU architecture started from solving a very different problem than deep learning,” Habana CBO Eitan Medina told me. “The GPU, almost by accident, happened to be just better because it has a higher degree of parallelism. However, if you start from a clean sheet of paper and analyze what a neural network looks like, you can, if you put really smart people in the same room […] come up with a better architecture.” That’s what Habana did for its Goya processor, and it is now carrying what it learned there over to Gaudi.

For developers, the fact that Habana Labs supports all of the standard AI/ML frameworks, as well as the ONNX format, should make the switch from one processor to another pretty painless.
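In practice, that portability typically runs through ONNX. As a hedged sketch of the framework side of the hand-off (the toy model and file name below are placeholders, and the Habana-specific toolchain that would consume the exported graph isn’t shown), exporting a PyTorch model to ONNX looks like this:

```python
import torch
import torch.nn as nn

# A placeholder network standing in for whatever model you actually trained.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Export a traced graph to the framework-neutral ONNX format. Any backend
# that accepts ONNX -- as Habana says its tools do -- can compile this file
# for its own hardware, which is what makes switching processors painless.
dummy_input = torch.randn(1, 784)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["logits"],
)
```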

“Training AI models require exponentially higher compute every year, so it’s essential to address the urgent needs of the data center and cloud for radically improved productivity and scalability. With Gaudi’s innovative architecture, Habana delivers the industry’s highest performance while integrating standards-based Ethernet connectivity, enabling unlimited scale,” said David Dahan, CEO of Habana Labs. “Gaudi will disrupt the status quo of the AI Training processor landscape.”

As the company told me, the secret here isn’t just the processor itself but also how it connects to the rest of the system and to other processors (using standard RDMA over Converged Ethernet, or RoCE, if that’s something you really care about).

Habana Labs argues that scaling a GPU-based training system beyond 16 GPUs quickly hits a number of bottlenecks, and for a number of larger models, going beyond that is becoming a necessity. With Gaudi, scaling becomes simply a question of adding standard Ethernet switches, so you could easily grow to a system with 128 Gaudis.

“With its new products, Habana has quickly extended from inference into training, covering the full range of neural-network functions,” said Linley Gwennap, principal analyst of The Linley Group. “Gaudi offers strong performance and industry-leading power efficiency among AI training accelerators. As the first AI processor to integrate 100G Ethernet links with RoCE support, it enables large clusters of accelerators built using industry-standard components.”
