How Salesforce paved the way for the SaaS platform approach

When we think of enterprise SaaS companies today, just about every startup in the space aspires to be a platform. That means they want people using their stack of services to build entirely new applications, either to enhance the base product or even to build entirely independent companies. But when Salesforce launched Force.com, the company’s Platform as a Service, in 2007, there was no model to follow.

It turns out that Force.com was actually the culmination of a series of incremental steps after the launch of the first version of Salesforce in February 2000, all of which were designed to make the software more flexible for customers. Company co-founder and CTO Parker Harris says they didn’t set out to be a platform. “We were a solution first, I would say. We didn’t say let’s build a platform and then build sales-force automation on top of it. We wanted a solution that people could actually use,” Harris told TechCrunch.

The march toward becoming a full-fledged platform started with simple customization. That first version of Salesforce was pretty basic, and the company learned over time that customers didn’t always use the same language it did to describe customers and accounts — and that was something that would need to change.


Google is convinced it can get game streaming right

Phil Harrison won’t budge. As a vice president and general manager at Google, he’s spent the past 15 minutes explaining why Stadia, the company’s freshly announced game-streaming service, will actually work on the existing internet infrastructure across North America and Europe. He’s focused on the investments Google has made over the past 20 years in cloud networks, talking up the company’s 7,500 server nodes, custom CPUs and partnerships with major internet service providers.

I’m hesitant to believe him. I lived through the hype of OnLive a decade ago; we’ve heard these promises before, only to be sorely disappointed. Of course, 10 years on, Google is promising even more — seamlessly streaming games at 4K and 60fps with HDR, integrating “play now” options into YouTube, and even loading a specific section of a game via a hyperlink, on any platform, in just five seconds.

“It’s actually closer to three seconds than five seconds,” Harrison told me the day after revealing Stadia on the Google stage at GDC. “But we thought, you know what, five seconds is actually, probably a good enough promise.”

Google at GDC 2019

That’s how much confidence Google executives have in Stadia. It’s due to launch later this year (with an unknown payment model), and there’s momentum behind the entire initiative from within and outside of Google.

Harrison is a 25-year veteran of the video game industry who held leading roles at both Sony and Microsoft, and Google picked him up in early 2018; Jade Raymond, executive producer of the first Assassin’s Creed games at Ubisoft Montreal, was recently hired to lead Google’s Stadia Games and Entertainment arm. The companies behind the top game engines, Unreal and Unity, are partnering with Google on Stadia, alongside dozens of other studios.

At launch, Google says Stadia will be able to stream any participating game to any device — including Chromecasts, smartphones, tablets, laptops and PCs — at up to 4K and 60fps, plus it’ll enable a handful of social and YouTube tricks.

“Our platform is capable of delivering all of the quality, all of the capability that we discussed, over conventional fixed, wired internet into your home,” Harrison said.

Google at GDC 2019

The problem I have with Harrison’s repeated assurance is that it relies on things Google has control over, like server hardware, distribution and its special relationships with major ISPs. It doesn’t directly address the perpetual problem with game streaming: A lot of people have crappy internet.

No matter how powerful Google’s cloud infrastructure is, in-home and mobile connection speeds are a potential bottleneck that developers can’t R&D away. Harrison answered this critique as follows:

“Google’s been making investments in the fundamental fabric of the internet, the networking within our data centers, the way our data centers are connected, for 20 years. We’ve been a hardware company in the data center for longer than we’ve been a hardware company in Google Home or phones. That gives us some performance advantages in terms of the way the data reaches the ISP and how that data gets to you in your home. That allows us to deliver a very, very high-quality experience.”

Internet distribution has certainly expanded since OnLive and Gaikai tried to tackle low-latency streaming. Statista estimates 109.8 million homes in the US had a fixed broadband subscription in 2017, compared with 84.5 million in 2010. Internet speeds have steadily increased over the years as well, with the US clocking an average download rate of 96.25Mbps in late 2018, according to Ookla. (Stadia recommends a minimum of 25Mbps). Plus, Google’s big Project Stream beta in October went shockingly well. All of this is good for Google.



And yet. 96Mbps may be the average internet speed across the US, but it certainly isn’t guaranteed anywhere. The fastest and most reliable internet-delivery system, fiber-optic, is not even available in 70 percent of the country, according to the FCC. Organizations from Microsoft to federal agencies and churches are working to fill in the gaps in rural internet access worldwide, but it’s a tricky problem for any one group to pin down. That includes Google, whose own efforts to establish a national fiber network have more or less dissolved, leaving a trail of useless silvery cables in cities across the country.

Google’s relationship with ISPs is clearly strong. It has direct-peering relationships with companies like Verizon, AT&T, Comcast and Sprint, meaning those ISPs are plugged into Google’s servers, giving its data a clear path. Of course, BroadbandNow estimates there are more than 2,000 ISPs in the US, many of them small networks serving rural areas, and Google doesn’t have the same relationship with all of them. In those cases, its data has to bounce around just like everyone else’s.

“It’s the depth of the peering relationships that we have with ISPs to bring Google data to the internet today,” Harrison said. “We’re able to build on top of that to build a very high-performing game experience streamed to players. Whereas other streaming services that have come before have had to go through that multi-hop scenario, we know what it takes to get to that high quality.”

Phil Harrison at GDC 2019

There’s no doubt in my mind that Stadia, once it launches, will work. It will load games and they will be playable. However, OnLive technically worked for a lot of users, too. Hell, PlayStation Now exists today and it also functions, but it’s definitely not disrupting the established video game ecosystem.

What matters is how well Stadia will work — and, perhaps more importantly, how well players will expect it to work. If (rather, when) it takes 15 seconds for a link to load a video, or if a game stutters just enough to be annoying the entire way through, or if it cuts out at random times according to the whims of a wily internet connection, Stadia will likely be viewed as a failure, regardless of how far Google’s technology has truly come.

Google is promising incredible things. Aside from 4K, HDR and 60fps, and game loading times as short as three seconds (all from a link, no less), Google is already talking about one day streaming games in 8K and 120fps. Stadia’s technology is scalable and Google is building it to last. But, first, it has to start.

“We also know that we won’t reach everywhere in the world, day one,” Harrison said. “We’re not claiming that we will reach everywhere in the world. The internet connectivity continues to grow, continues to reach more and more people at higher and higher speeds every year. There are some technologies just a little bit over the horizon which we think will be impactful.”


LogRocket nabs $11M Series A to fix web application errors faster

Every time a visitor experiences an issue on your website, it affects their impression of the company, which is why companies want to resolve issues quickly. LogRocket, a Cambridge, MA startup, announced an $11 million Series A investment today to give engineering and web development teams access to the more precise information they need to fix issues faster.

The round was led by Battery Ventures with participation from seed investor Matrix Partners. When combined with an earlier unannounced $4 million seed round, the company has raised a total of $15 million.

The two founders, Matthew Arbesfeld and Ben Edelstein, have been friends since birth, growing up together in the Boston suburbs. After attending college separately at MIT and Columbia, the two moved to San Francisco, where they worked as engineers building front-end applications.

The idea for the company grew out of the founders’ own frustration tracking errors. They found they had to do a lot of manual research to find problems, and it was taking too much time. That’s where they got the idea for LogRocket.

“What LogRocket does is we capture a recording in real time of all the user activity so the developer on the other end can replay exactly what went wrong and troubleshoot issues faster,” Arbesfeld explained.

Screenshot: LogRocket

The tool works by capturing low-resolution images of troublesome activity of each user and putting them together in a video. When there is an error or problem, the engineer can review the video and watch exactly what the user was doing when he or she encountered an error, allowing them to identify and resolve the problem much more quickly.

Arbesfeld said the company doesn’t have a video storage issue because it concentrates on capturing problems instead of the entire experience. “We’re looking at frustrating moments of the user, so that we can focus on the problem areas,” he explained.
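
To make the capture-and-replay idea concrete, here is a minimal TypeScript sketch of how a tool in this category could buffer lightweight snapshots and upload them only around error moments. The class, method and endpoint names are hypothetical illustrations, not LogRocket’s actual SDK.

```typescript
// Hypothetical sketch of a session recorder in the LogRocket mold.
// Names and the upload endpoint are illustrative, not the real API.

interface Frame {
  timestamp: number;
  snapshot: string; // low-resolution capture of the viewport/DOM state
}

class SessionRecorder {
  private frames: Frame[] = [];

  // Continuously buffer cheap snapshots of user activity.
  capture(snapshot: string): void {
    this.frames.push({ timestamp: Date.now(), snapshot });
    // Keep only a rolling one-minute window so storage stays bounded.
    const cutoff = Date.now() - 60_000;
    this.frames = this.frames.filter((f) => f.timestamp >= cutoff);
  }

  // When an error (or other "frustration" signal) fires, ship the buffer
  // so an engineer can replay exactly what led up to the problem.
  flushOnError(error: Error): void {
    const replay = { error: error.message, frames: this.frames };
    void fetch("https://example.invalid/replays", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(replay),
    });
    this.frames = [];
  }
}

// Record everything cheaply, upload only around problem moments.
const recorder = new SessionRecorder();
window.addEventListener("error", (e) => recorder.flushOnError(e.error));
```

The asymmetry the founders describe is the point: capture is constant and cheap, while storage and review are reserved for the moments that matter.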

Customers can access the data in the LogRocket dashboard, or it can be incorporated into help desk software like Zendesk. The company is growing quickly, with 25 employees and 500 customers, including Reddit, Ikea, CarGurus and Bloomberg, just 18 months after its founding.

As for the funding, they see this as the start of a long-term journey. “Our goal is to get out to a much wider audience and build a mature sales and marketing organization,” Arbesfeld said. He sees a future with thousands of customers and ambitious revenue goals. “We want to continue to use the data we have to offer more proactive insights into highest impact problems,” he said.


Salesforce update brings AI and Quip to customer service chat experience

When Salesforce introduced Einstein, its artificial intelligence platform, in 2016, it was laying the groundwork for artificial intelligence across the platform. Since then, the company has introduced a variety of AI enhancements to the Salesforce product family. Today, customer service got some AI updates.

The goal of any customer service interaction is to get the customer answers as quickly as possible. Many users opt to use chat over phone, and Salesforce has added some AI features to help customer service agents get answers more quickly in the chat interface. (The company hinted that phone customer service enhancements are coming.)

For starters, Salesforce is using machine learning to deliver article recommendations, response recommendations and next best actions to the agent in real time as they interact with customers.  “With Einstein article recommendations, we can use machine learning on past cases and we can look at how articles were used to successfully solve similar cases in the past, and serve up the best article right in the console to help the agent with the case,” Martha Walchuk, senior director of product marketing for Salesforce Service Cloud explained.

Salesforce Service Console. Screenshot: Salesforce

The company is also using similar technology to provide response recommendations, which the agent can copy and paste into the chat to speed up the time to response. Before the interaction ends, the company can offer the next best action (which was announced last year) based on the conversation. For example, they could offer related information, an upsell recommendation or whatever type of action the customer defines.
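
The underlying idea is simpler than the models behind it: surface the article (or canned response) that most often resolved cases resembling the current one. Here is a toy TypeScript sketch of that scoring logic; the types, fields and keyword-overlap heuristic are hypothetical illustrations, not Salesforce’s Einstein implementation.

```typescript
// Toy "recommend the article that helped with similar past cases" scorer.
// Everything here is illustrative; Einstein's models are far more involved.

interface PastCase {
  keywords: string[];
  resolvedByArticleId: string;
}

interface Article {
  id: string;
  title: string;
}

function recommendArticle(
  currentKeywords: string[],
  history: PastCase[],
  articles: Article[],
): Article | undefined {
  const scores = new Map<string, number>();

  for (const past of history) {
    // Score each past case by keyword overlap with the current case...
    const overlap = past.keywords.filter((k) => currentKeywords.includes(k)).length;
    // ...and credit the article that resolved it.
    const prev = scores.get(past.resolvedByArticleId) ?? 0;
    scores.set(past.resolvedByArticleId, prev + overlap);
  }

  // Return the article with the highest accumulated score, if any.
  return articles
    .filter((a) => scores.has(a.id))
    .sort((a, b) => (scores.get(b.id) ?? 0) - (scores.get(a.id) ?? 0))[0];
}
```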

Salesforce is also using machine learning to help route each person to the most appropriate customer service rep. As Salesforce describes it, this feature uses machine learning to filter cases and route them to the right queue or agent automatically, based on defined criteria such as best qualified agent or past outcomes.
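
Stripped of the machine learning, the routing step reduces to ranking eligible agents on the criteria Salesforce mentions, such as qualifications and past outcomes. A simplified TypeScript sketch (hypothetical fields and weights, not Salesforce’s actual logic) might look like this:

```typescript
// Simplified skills- and outcome-based routing. Illustrative only.

interface Agent {
  id: string;
  skills: string[];
  pastResolutionRate: number; // 0..1, derived from historical outcomes
  openCases: number;          // current workload
}

function routeCase(requiredSkill: string, agents: Agent[]): Agent | undefined {
  return agents
    .filter((a) => a.skills.includes(requiredSkill))
    // Prefer agents with better historical outcomes, then lighter workloads.
    .sort(
      (a, b) =>
        b.pastResolutionRate - a.pastResolutionRate || a.openCases - b.openCases,
    )[0];
}
```

In production, the ranking would come from a trained model rather than two hand-picked fields, but the routing decision it feeds is the same.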

Finally, the company is embedding Quip, which it acquired in 2016 for $750 million, into the customer service console to allow agents to communicate with one another to find answers to difficult problems. That not only helps solve issues faster; the conversations themselves also become part of the knowledge base, which Salesforce can draw upon to help teach the machine learning algorithms the correct responses to commonly asked questions in the future.

As with the Oracle AI announcement this morning, this use of artificial intelligence in sales, service and marketing is part of a much broader industry trend, as these companies try to inject intelligence into workflows to make them run more efficiently.


Nvidia announces its next-gen RTX pods with up to 1,280 GPUs

Nvidia wants to be a cloud powerhouse. While its history may be in graphics cards for gaming enthusiasts, its recent focus has been on data center GPUs for AI, machine learning inference and visualization. Today, at its GTC conference, the company announced its latest RTX server configuration for Hollywood studios and others who need to quickly generate visual content.

A full RTX server pod can support up to 1,280 Turing GPUs on 32 RTX blade servers. That’s 40 GPUs per server, with each server taking up an 8U space. The GPUs here are Quadro RTX 4000 or 6000 GPUs, depending on the configuration.
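
For a sense of scale, a quick back-of-envelope calculation from those numbers shows why a full pod is multi-rack data center gear rather than something that fits in a single cabinet. The 42U rack size is my assumption, and the estimate ignores networking and storage equipment.

```typescript
// Back-of-envelope math on the RTX pod configuration described above.
// The 42U rack assumption is mine, not Nvidia's, and networking/storage
// gear is ignored.

const serversPerPod = 32;
const gpusPerServer = 40;
const rackUnitsPerServer = 8;
const rackUnitsPerRack = 42; // assumption: standard full-height racks

const gpusPerPod = serversPerPod * gpusPerServer;                 // 1,280 GPUs
const totalRackUnits = serversPerPod * rackUnitsPerServer;        // 256U
const racksNeeded = Math.ceil(totalRackUnits / rackUnitsPerRack); // at least 7

console.log({ gpusPerPod, totalRackUnits, racksNeeded });
```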

“NVIDIA RTX Servers — which include fully optimized software stacks available for OptiX RTX rendering, gaming, VR and AR, and professional visualization applications — can now deliver cinematic-quality graphics enhanced by ray tracing for far less than just the cost of electricity for a CPU-based rendering cluster with the same performance,” the company notes in today’s announcement.

All of this power can be shared by multiple users and the backend storage and networking interconnect is powered by technology from Mellanox, which Nvidia bought earlier this month for $6.9 billion. That acquisition and today’s news clearly show how important the data center has become for Nvidia’s future.

System makers like Dell, HP, Lenovo, Asus and Supermicro will offer RTX servers to their customers, all of which have been validated by Nvidia and support the company’s software tools for managing the workloads that run on them.

Nvidia also stresses that these servers would work great for running AR and VR applications at the edge and then serving the visuals to clients over 5G networks. That’s a few too many buzzwords, I think, and consumer interest in AR and VR remains questionable, while 5G networks remain far from mainstream, too. Still, there’s a role for these servers in powering cloud gaming services, for example.
