Microsoft acquires FSLogix to enhance Office 365 virtual desktop experience

Back in September, Microsoft announced a virtual desktop solution that lets customers run Office 365 and Windows 10 in the cloud. They mentioned several partners in the announcement that were working on solutions with them. One of those was FSLogix, a Georgia virtual desktop startup. Today, Microsoft announced it has acquired FSLogix. It did not share the purchase price.

“FSLogix is a next-generation app-provisioning platform that reduces the resources, time and labor required to support virtualization,” Brad Anderson, corporate VP for Microsoft Office 365, and Julia White, corporate VP for Microsoft Azure, wrote in a joint blog post today.

When Microsoft made the virtual desktop announcement in September they named Citrix, CloudJumper, Lakeside Software, Liquidware, People Tech Group, ThinPrint and FSLogix as partners working on solutions. Apparently, the company decided it wanted to own one of those experiences and acquired FSLogix.

Microsoft believes by incorporating the FSLogix solution, it will provide a better virtual desktop experience for its customers by enabling better performance and faster load times, especially for Office 365 ProPlus customers.

Randy Cook, founder and CTO at FSLogix, said the acquisition made sense given how well the two companies have worked together over the years. “From the beginning, in working closely with several teams at Microsoft, we recognized that our missions were completely aligned. Both FSLogix and Microsoft are dedicated to providing the absolute best experience for companies choosing to deploy virtual desktops,” Cook wrote in a blog post announcing the acquisition.

Lots of companies have what are essentially dumb terminals running just the tools each employee needs, rather than a fully functioning standalone PC. Citrix has made a living offering these services. When employees come in to start the day, they sign in with their credentials and they get a virtual desktop with the tools they need to do their jobs. Microsoft’s version of this involves Office 365 and Windows 10 running on Azure.

FSLogix was founded in 2013 and has raised more than $10 million, according to data on Crunchbase. Today’s acquisition, which has already closed according to Microsoft, comes on the heels of last week’s announcement that the company was buying Xoxco, an Austin-based developer shop with experience building conversational bots.


Amazon Comprehend adds customized language lists to machine learning tool

Last year Amazon announced Comprehend, a natural language processing tool to help companies extract common words and phrases from a corpus of information. Today, a week ahead of its re:Invent customer conference, Amazon announced an enhancement to Comprehend that allows developers to build lists of specialized words and phrases without machine learning domain knowledge.

“Today we are excited to bring new customization features to Comprehend, which allow developers to extend Comprehend to identify natural language terms and classify text which is specialized to their team, business or industry,” Matt Wood, GM for deep learning and AI, wrote in a blog post announcing the enhancement.

The key aspect of this is that Amazon is handling all of the complexity, allowing developers to add customized lists without having deep machine learning or natural language processing background. “Under the hood, Comprehend will do the heavy lifting to build, train, and host the customized machine learning models, and make those models available through a private API,” Wood wrote.

This involves two pieces. First, developers define a list of custom entities. This could be something like legal language at a law firm or a list of part numbers at an automobile company. All the developer needs to do is expose a list of these entities. Amazon learns to identify the customized language and builds a private, customized model based on the list.

The second piece involves customized classification. Once you have the language, you can begin to build logical lists where the terms appear. “Through as few as 50 examples, Comprehend will automatically train a custom classification model that can be used to categorize all your documents. You could group support emails by department, social media posts by product, or analyst reports by business unit,” Wood wrote. You can see how this could be useful for taking these items once they have been extracted and categorized, and moving them through a workflow to the appropriate personnel, or passing them to an application for further programmatic use.
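To make the two pieces concrete, here is a rough sketch of what the request payloads for a custom entity recognizer and a custom classifier might look like with the boto3 Comprehend client. The names, S3 paths, entity type and IAM role ARN are illustrative placeholders, and the exact API shape may differ from what Amazon shipped:

```python
# Sketch of the two Comprehend customization requests (illustrative values).
# A real call would pass these dicts to boto3's comprehend client, e.g.
#   boto3.client("comprehend").create_entity_recognizer(**recognizer_request)

# Piece 1: a custom entity recognizer trained from a plain list of terms
# (e.g. part numbers) plus documents in which those terms appear.
recognizer_request = {
    "RecognizerName": "part-number-recognizer",        # hypothetical name
    "LanguageCode": "en",
    "DataAccessRoleArn": "arn:aws:iam::123456789012:role/comprehend-demo",
    "InputDataConfig": {
        "EntityTypes": [{"Type": "PART_NUMBER"}],
        "Documents": {"S3Uri": "s3://example-bucket/docs/"},
        "EntityList": {"S3Uri": "s3://example-bucket/part-numbers.csv"},
    },
}

# Piece 2: a custom classifier trained from as few as ~50 labeled examples,
# e.g. support emails labeled by department.
classifier_request = {
    "DocumentClassifierName": "support-email-router",  # hypothetical name
    "LanguageCode": "en",
    "DataAccessRoleArn": "arn:aws:iam::123456789012:role/comprehend-demo",
    "InputDataConfig": {"S3Uri": "s3://example-bucket/labeled-emails.csv"},
}

print(sorted(recognizer_request["InputDataConfig"]))
print(classifier_request["DocumentClassifierName"])
```

The point of the design is visible in how little is in these payloads: the developer supplies only labeled data in S3, and the model building, training and hosting all happen behind a private API.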

Amazon is providing a way to build customized machine learning models while it takes care of the details behind the scenes. At their best, cloud companies simplify the complex and provide access to sets of services that might otherwise be too difficult for many developers to achieve on their own. Comprehend is trying to offer a way to build customized models without having any machine learning knowledge whatsoever.

The new Comprehend features are generally available starting today.


Apple and Microsoft are fixing a serious iCloud bug in Windows 10

Jaap Arriens/NurPhoto via Getty Images

The return of Windows 10’s October update wasn’t welcome news for everyone. Microsoft says it’s “working with Apple” to solve an iCloud for Windows bug that creates problems updating or syncing shared photo albums when using the latest Windows release. Suffice it to say that’s a serious problem if you’re interested in seamless access to your photos across your devices.

It’s not certain when you can expect a solution, but the two companies aren’t taking any chances in the meantime. Microsoft is blocking PCs with iCloud for Windows from installing the latest Windows 10 update, and those who try to install it after the fact will get a warning that Windows doesn’t support that version of iCloud. Like it or not, you may have to forgo iCloud or the Windows update for a while.


Google looks to former Oracle exec Thomas Kurian to move cloud business along

Diane Greene announced on Friday that she was stepping down after three years running Google’s cloud business. She will stay on until the first of the year to help her successor, Thomas Kurian, with the transition. He left Oracle at the end of September after more than 20 years with the company, and is charged with making Google’s cloud division more enterprise-friendly, a goal that has oddly eluded the company.

Greene was brought on board in 2015 to bring some order and enterprise savvy to the company’s cloud business. While she did help move them along that path, and grew the cloud business, it simply hasn’t been enough. There have been rumblings for months that Greene’s time was coming to an end.

So the torch is being passed to Kurian, a man who spent over two decades at a company that might be the exact opposite of Google. He ran product at Oracle, a traditional enterprise software company. Oracle itself has struggled to make the transition to a cloud company, but Bloomberg reported in September that one of the reasons Kurian was taking a leave of absence at the time was a difference of opinion with Chairman Larry Ellison over cloud strategy. According to the report, Kurian wanted to make Oracle’s software available on public clouds like AWS and Azure (and Google Cloud). Ellison apparently didn’t agree and a couple of weeks later Kurian announced he was moving on.

Even though Kurian’s background might not seem to be perfectly aligned with Google, it’s important to keep in mind that his thinking was evolving. He was also in charge of thousands of products and helped champion Oracle’s move to the cloud. He has experience successfully nurturing products enterprises have wanted, and perhaps that’s the kind of knowledge Google was looking for in its next cloud leader.

Ray Wang, founder and principal analyst at Constellation Research, says Google still needs to learn to support the enterprise, and he believes Kurian is the right person to help the company get there. “Kurian knows what’s required to make a cloud company work for enterprise customers,” Wang said.

If he’s right, perhaps an old-school enterprise executive is just what Google requires to turn its Cloud division into an enterprise-friendly powerhouse. Greene has always maintained that it was still early days for the cloud and Google had plenty of time to capture part of the untapped market, a point she reiterated in her blog post on Friday. “The cloud space is early and there is an enormous opportunity ahead,” she wrote.

She may be right about that, but market share positions seem to be hardening. AWS, which was first to market, has an enormous market share lead, with over 30 percent by most accounts. Microsoft is the only company with the market strength at the moment to give Amazon a run for its money, and the only other company with double-digit market share. In fact, Amazon has a larger market share than the next four companies combined, according to data from Synergy Research.

While Google is always mentioned alongside AWS and Microsoft among the Big 3 cloud companies, with around $4 billion in revenue a year it has a long way to go to reach the level of those other companies. Despite Greene’s assertions, time could be running out to make a run. Perhaps Kurian is the person to push the company to grab some of that untapped market as companies move more workloads to the cloud. At this point, Google is counting on him to do just that.


How cities can fix tourism hell

A steep and rapid rise in tourism has left behind a wake of economic and environmental damage in cities around the globe. In response, governments have been enacting policies that attempt to limit the number of visitors who come in. We’ve decided to spare you from any more Amazon HQ2 talk and instead focus on why cities should shy away from reactive policies and instead utilize their growing set of technological capabilities to change how they manage tourists within city lines.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts:

Well – it didn’t take long for the phrase “overtourism” to get overused. The popular buzzword describes the influx of tourists who flood a location and damage the quality of life for full-time residents. The term has become such a common topic of debate in recent months that it was even featured this past week on Oxford Dictionaries’ annual “Words of the Year” list.

But the expression’s frequent appearance in headlines highlights the growing number of cities plagued by the externalities from rising tourism.

In the last decade, travel has become easier and more accessible than ever. Low-cost ticketing services and apartment-rental companies have brought down the costs of transportation and lodging; the ubiquity of social media has ticked up tourism marketing efforts and consumer demand for travel; economic globalization has increased the frequency of business travel; and rising incomes in emerging markets have opened up travel to many who previously couldn’t afford it.

Now, unsurprisingly, tourism has spiked dramatically, with the UN’s World Tourism Organization (UNWTO) reporting that tourist arrivals grew an estimated 7% in 2017 – materially above the roughly 4% seen consistently since 2010. The sudden and rapid increase of visitors has left many cities and residents overwhelmed, dealing with issues like overcrowding, pollution, and rising costs of goods and housing.

The problems cities face with rising tourism are only set to intensify. And while it’s hard for me to imagine when walking shoulder-to-shoulder with strangers on tight New York streets, the number of tourists in major cities like these can very possibly double over the next 10 to 15 years.

China and other emerging markets have already seen significant growth in the middle class and have a long runway ahead. According to the Organization for Economic Co-operation and Development (OECD), the global middle class is expected to rise from the 1.8 billion observed in 2009 to 3.2 billion by 2020 and 4.9 billion by 2030. The new money brings with it a new wave of travelers looking to catch a selfie with the Eiffel Tower, with the UNWTO forecasting international tourist arrivals to increase from 1.3 billion to 1.8 billion by 2030.

With a growing sense of urgency around managing their guests, more and more cities have been implementing policies focused on limiting the number of tourists who visit altogether, whether through hard visitor limits, tourist taxes or other restrictions.

But as the UNWTO points out in its report on overtourism, the negative effects of rising tourism are not tied solely to the number of visitors in a city; they are also largely driven by tourism seasonality, tourist behavior, the behavior of the resident population, and the functionality of city infrastructure. Cities with relatively few tourists, for example, have experienced issues similar to those seen in cities with millions.

While many cities have focused on reactive policies meant to quell tourism, they should instead focus on technology-driven solutions that can help manage tourist behavior, create structural changes to city tourism infrastructure, and allow cities to continue capturing the significant revenue stream that tourism provides.


Yes, cities are faced with the headwind of a growing tourism population, but city policymakers also benefit from the tailwind of having more technological capabilities than their predecessors. With the rise of smart city and Internet of Things (IoT) initiatives, many cities are equipped with tools such as connected infrastructure, lidar-sensors, high-quality broadband, and troves of data that make it easier to manage issues around congestion, infrastructure, or otherwise.

On the congestion side, we have already seen companies using geo-tracking and other smart city technologies to manage congestion around event venues, roads, and stores. Cities can apply the same strategies to manage the flow of tourist and resident movement.

And while you can’t necessarily prevent people from visiting the Louvre or the Colosseum, cities are using a variety of methods to incentivize the use of less congested spaces or disperse the times at which people flock to highly trafficked locations, using tools such as real-time congestion notifications, data-driven ticketing schedules for museums and landmarks, or digitally guided tours along less congested routes.
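The data-driven ticketing idea can be illustrated with a toy scheduler: given a forecast of visitors per time slot and a per-slot capacity cap for a landmark, overflow demand is offered the least-busy remaining slots instead. All numbers and names here are invented for illustration:

```python
def spread_demand(demand, capacity):
    """Redistribute forecast visitors that exceed a slot's capacity
    into the least-busy slots, returning the adjusted schedule.

    demand: dict of slot -> forecast visitors; capacity: per-slot cap.
    """
    schedule = dict(demand)
    # Collect the overflow from every slot that is above capacity.
    overflow = 0
    for slot, count in schedule.items():
        if count > capacity:
            overflow += count - capacity
            schedule[slot] = capacity
    # Offer overflow tickets to the emptiest slots first.
    while overflow > 0:
        slot = min(schedule, key=schedule.get)
        if schedule[slot] >= capacity:
            break  # everything is full; remaining demand is turned away
        move = min(overflow, capacity - schedule[slot])
        schedule[slot] += move
        overflow -= move
    return schedule

# E.g. a museum with a cap of 100 visitors per hourly slot:
hourly = {"10:00": 180, "11:00": 90, "12:00": 40, "13:00": 30}
print(spread_demand(hourly, capacity=100))
```

A real system would price or notify rather than force the reassignment, but the shape of the problem is the same: smooth the peaks into the valleys rather than cap total visitors.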

Companies and municipalities in cities like London and Antwerp are already working on using tourist movement tracking to manage crowds and help notify and guide tourists to certain locations at the most efficient times. Other cities have developed augmented reality tours that can guide tourists in real-time to less congested spaces by dynamically adjusting their routes.

A number of startups are also working with cities to use collected movement data to help reshape infrastructure to better fit the long-term needs and changing demographics of their occupants. Companies like Stae or Calthorpe Analytics use analytics on movement, permitting, business trends and more to help cities implement more effective zoning and land use plans. City planners can use the same technology to design street structure that increases usable sidewalk space and to better allocate zoning for hotels, retail or other tourist-friendly attractions.

Focusing counter-overtourism efforts on smart city technologies can help adjust the behavior and movement of travelers in a city through a number of avenues, in a way tourist caps or tourist taxes do not.

And at the end of the day, tourism is one of the largest sources of city income, meaning it also plays a vital role in determining the budgets cities have to plow back into transit, roads, digital infrastructure, the energy grid, and other pain points that plague residents and travelers alike year-round. And by disallowing or disincentivizing tourism, cities can lose valuable capital for infrastructure, which can subsequently exacerbate congestion problems in the long-run.

Some cities have justified tourist taxes by saying the revenue stream would be invested in improving the issues overtourism has caused. But the daily or upon-entry tourist taxes we’ve seen so far haven’t come close to offsetting the lost revenue from disincentivized tourists, who at the start of 2017 spent, all-in, nearly $700 per day in the US on transportation, souvenirs and other expenses, according to the U.S. National Travel and Tourism Office.

In 2017, international tourism alone drove $1.6 trillion in earnings, and in 2016, travel and tourism accounted for roughly 1 in 10 jobs in the global economy, according to the World Travel and Tourism Council. And the benefits of travel are not only economic, with cross-border tourism promoting transfers of culture, knowledge and experience.

But to be clear, I don’t mean to say smart city technology initiatives alone are going to solve overtourism. The significant wave of growth in the number of global travelers is a serious challenge and many of the issues that result from spiking tourism, like housing affordability, are incredibly complex and come down to more than just data. However, I do believe cities should be focused less on tourist reduction and more on solutions that enable tourist management.

Utilizing and allocating more resources to smart city technologies can not only more effectively and structurally limit the negative impacts of overtourism, but also allows cities to benefit from a significant, high-growth tourism revenue stream. Cities can then create a virtuous cycle of reinvestment, plowing that revenue back into their infrastructure to better manage visitor growth, resident growth, and quality of life over the long term. Cities can have their cake and eat it too.


Former Oracle exec Thomas Kurian to replace Diane Greene as head of Google Cloud

Diane Greene announced in a blog post today that she would be stepping down as CEO of Google Cloud and will be helping transition former Oracle executive Thomas Kurian to take over early next year.

Greene took over the position almost exactly three years ago when Google bought Bebop, the startup she was running. The thinking at the time was that the company needed someone with a strong enterprise background and Greene, who helped launch VMware, certainly had the enterprise credentials they were looking for.

In the blog post announcing the transition, she trumpeted her accomplishments. “The Google Cloud team has accomplished amazing things over the last three years, and I’m proud to have been a part of this transformative work. We have moved Google Cloud from having only two significant customers and a collection of startups to having major Fortune 1000 enterprises betting their future on Google Cloud, something we should accept as a great compliment as well as a huge responsibility,” she wrote.

The company had a disparate set of cloud services when she took over, and one of the first things Greene did was to put them all under a single Google Cloud umbrella. “We’ve built a strong business together — set up by integrating sales, marketing, Google Cloud Platform (GCP), and Google Apps/G Suite into what is now called Google Cloud,” she wrote in the blog post.

As for Kurian, he stepped down as president of product development at Oracle at the end of September. He had announced a leave of absence earlier in the month before making the exit permanent. Like Greene before him, he brings a level of enterprise street cred, which the company needs as it continues to try to grow its cloud business.

After three years with Greene at the helm, Google, which has tried to position itself as the more open cloud alternative to Microsoft and Amazon, has still struggled to gain market share against its competitors, remaining under 10 percent consistently throughout Greene’s tenure.

As Synergy’s John Dinsdale told TechCrunch in an article on Google Cloud’s strategy in 2017, the company had not been particularly strong in the enterprise to that point. “The issues of course are around it being late to market and the perception that Google isn’t strong in the enterprise. Until recently Google never gave the impression (through words or deeds) that cloud services were really important to it. It is now trying to make up for lost ground, but AWS and Microsoft are streets ahead,” Dinsdale explained at the time. Greene was trying hard to change that perception.

Google has not released many revenue numbers related to the cloud, but in February it indicated it was earning a billion dollars a quarter, a number that Greene felt put Google in elite company. Amazon and Microsoft were reporting numbers like that at the time. Google stopped reporting cloud revenue after that report.

Regardless, the company will turn to Kurian to continue growing those numbers now. “I will continue as CEO through January, working with Thomas to ensure a smooth transition. I will remain a Director on the Alphabet board,” Greene wrote in her blog post.

Interestingly enough, Oracle has struggled with its own transition to the cloud. Kurian gets a company that was born in the cloud, rather than one that has made a transition from on-prem software and hardware to one solely in the cloud. It will be up to him to steer Google Cloud moving forward.


Amazon launches ‘Alexa-hosted skills’ for voice app developers

Amazon on Thursday launched a new service aimed at Alexa developers that automatically provisions and helps them manage a set of AWS cloud resources for their Alexa skill’s backend service. The service is intended to shorten the time it takes developers to launch their skills, by allowing them to focus on their skills’ design and unique features rather than on the cloud services they need.

“Previously you had to provision and manage this back-end on your own with a cloud endpoint, resources for media storage, and a code repository,” explained Amazon on its company blog post, announcing the new service. “Alexa-hosted skills offer an easier option. It automatically provisions and hosts an AWS Lambda endpoint, Amazon S3 media storage, and a table for session persistence so that you can get started quickly with your latest project.”

Developers will also be able to use a new code editor in the ASK Developer Console to edit their code, while AWS Lambda will handle routing the skill request, executing the skill’s code, and managing the skill’s compute resources.

Amazon S3, meanwhile, can be used for things the skill needs to store – like media files, such as the images being used for the skill’s Echo Show, Echo Spot and Fire TV versions.
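To make the division of labor concrete, the skill code that Lambda ends up hosting can be as small as a handler that maps an incoming Alexa request to a spoken response. A minimal sketch without the ASK SDK, following the Alexa skill request/response JSON format (the speech text is invented for illustration):

```python
def lambda_handler(event, context):
    """Minimal Alexa skill backend: the hosted Lambda endpoint routes
    requests here, and we return speech in the Alexa response format."""
    request_type = event.get("request", {}).get("type", "")
    if request_type == "LaunchRequest":
        speech = "Welcome! Ask me for a fact."
    else:
        speech = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            # Keep the session open only on launch, so the user can follow up.
            "shouldEndSession": request_type != "LaunchRequest",
        },
    }

# Simulate the request Lambda would receive when the skill is opened:
result = lambda_handler({"request": {"type": "LaunchRequest"}}, None)
print(result["response"]["outputSpeech"]["text"])
```

With Alexa-hosted skills, the endpoint running this handler, the S3 bucket for media, and the session-persistence table are all provisioned for the developer rather than set up by hand.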

The service comes at a time when Amazon Alexa and Google Home are in a race to grab market share – and mind share – in the smart speaker industry. A lot of this will come down to how useful these devices are for customers – and well-designed skills are a part of that.

Smart speaker adoption is growing fast in the U.S., having recently reached 57.8 million adults, according to a report from Voicebot. But in terms of third-party development of voice apps, Amazon leads Google Home, having passed 40,000 U.S. skills in September.

Amazon says Alexa-hosted skills are available to developers in all Alexa locales. Developers can apply to join the preview here.


Uber joins Linux Foundation, cementing commitment to open-source tools

Uber announced today at the 2018 Uber Open Summit that it was joining the Linux Foundation as a Gold Member, making a firm commitment to using and contributing to open-source tools.

Uber CTO Thuan Pham sees the Linux Foundation as a place for companies like his to nurture and develop open-source projects. “Open source technology is the backbone of many of Uber’s core services and as we continue to mature, these solutions will become ever more important,” he said in a blog post announcing the partnership.

What’s surprising is not that Uber joined, but that it took so long. Uber has long been known for making use of open source in its core tools, working on over 320 open-source projects and repositories from 1,500 contributors involving over 70,000 commits, according to data provided by the company.

“Uber has made significant investments in shared software development and community collaboration through open source over the years, including contributing the popular open-source project Jaeger, a distributed tracing system, to the Linux Foundation’s Cloud Native Computing Foundation in 2017,” an Uber spokesperson told TechCrunch.

Linux Foundation Executive Director Jim Zemlin was certainly happy to welcome Uber into the fold. “Their expertise will be instrumental for our projects as we continue to advance open solutions for cloud native technologies, deep learning, data visualization and other technologies that are critical to businesses today,” Zemlin said in a statement.

The Linux Foundation is an umbrella group supporting myriad open-source projects and providing an organizational structure for companies like Uber to contribute and maintain open-source projects. It houses sub-organizations like the Cloud Native Computing Foundation, Cloud Foundry Foundation, The Hyperledger Foundation and the Linux operating system, among others.

These open-source projects provide a base on top of which contributing companies and the community of developers can add value, if they wish, and build a business. Others like Uber, which use these technologies to fuel their backend systems, won’t sell additional services, but can capitalize on the openness to meet their own requirements in the future, while also acting as contributors who give as well as take.


OpenStack regroups

Only a few years ago, OpenStack was the hottest open-source project around, with a bustling startup ecosystem to boot. The project, which gives enterprises the tools to run the equivalent of AWS in their own private data centers, ran into trouble as it tried to tackle too many individual projects at the same time and enterprises took longer than expected to adopt it. That meant many a startup floundered or was acquired before it was able to gain traction while the nonprofit foundation that manages the project started to scale back its big tent approach and refocused on its core services.

The height of the OpenStack hype came around late 2014, when even small startups used their copious venture funding to host lavish parties at the project’s conferences. But by 2016, it was deep in the trough of disillusionment, as a number of major backers like HPE, Cisco and IBM started to sell off their OpenStack assets or reduce their involvement in the project, and some of the startups in the ecosystem called it quits. But something interesting happened after that. The OpenStack project moved along, fixed many of its problems and adapted to a changing world where everybody wants to talk about containers and edge computing.

Today, it’s a stable system that’s the de facto standard for running private clouds. There’s very little hype, but now there’s lots of actual usage. Only a few years ago, there was plenty of hype, but you would’ve been hard-pressed to find any major company that ran a significant OpenStack deployment in production. People who made an early bet on OpenStack and seemed miserable a year ago now have a bit of bounce in their step again. And lately, I’ve heard from a number of vendors, including the likes of SUSE, that tell me they make more money from OpenStack now than at any time during the hype phase. Indeed, according to the Foundation’s latest stats, OpenStack users now use the system to manage well over 10 million cores of compute power.

“There is a perception versus reality thing,” OpenStack Foundation CTO Mark Collier told me. “We’re past the peak of the hype cycle for this particular technology in people’s minds. And so there’s this interesting paradox which is that adoption tends to go up as hype goes down.”

Stable systems are boring, though, and so it’s maybe no surprise that the OpenStack Foundation decided that it’s time to redirect some of its community’s energy to tackle some new problems, all with a focus on open source infrastructure problems, but without the requirement that they are tied directly to the actual OpenStack project. That’s a move that started more than a year ago and that’s now starting to take concrete form.

At the OpenStack Summit in Berlin this week, the OpenStack Foundation announced that this would be the last of its bi-annual conferences under this name. Going forward, it’ll be the Open Infrastructure Summit. For some, that was surely a major surprise — though the Foundation had been laying the groundwork for this for a while now. Don’t expect the Foundation itself to change its name, though. Just like the Linux Foundation is keeping its name even though it now helps manage a plethora of other foundations, the OpenStack Foundation isn’t about to spend a lot of money and energy rebranding.

What is changing, though, is the nature of what the Foundation is doing. A board meeting earlier in the week made official the Foundation’s process for adopting new projects outside of the core OpenStack project. There’s now a process for adding so-called “pilot projects” and fostering them for a minimum of 18 months. With this, the Foundation is also making it clear that its focus for these projects will be on continuous integration and continuous delivery (CI/CD), container infrastructure, edge computing, data center technologies, and artificial intelligence and machine learning. There are currently four of these pilot projects: Airship, Kata Containers, StarlingX and Zuul.

None of these projects are new — and all of them are in different stages of development — but it’s a start for OpenStack to spread its wings. The idea here is not to manage dozens of projects, though — or to increase the Foundation’s revenue by setting up new foundations inside the current framework. “We’re not trying to have dozens of projects,” OpenStack’s VP of marketing and community services Lauren Sell explained. “It’s really this tighter, more focused scope around people who run or manage infrastructure.” There’s also no additional bureaucracy — at least for the time being. “We have not created new boards of directors or foundations for each project,” Collier added. “We’re also not trying to house a huge amount of projects. We have a specific form of open collaboration that we feel has really proven it works with OpenStack. And a few people started coming to us going: we’d like to use that same model.”

The Foundation members are the first to acknowledge that there are still details that need to be worked out — and that will take a while.

At the same time, though, OpenStack — the actual technology — isn’t going anywhere, and will remain core to the Foundation’s efforts. “We said very clearly this week that open infrastructure starts with OpenStack, so it’s not separate from it. OpenStack is the anchor tenant of the whole concept,” Collier said. “The core OpenStack community is really thinking about how OpenStack is deployed and it’s always deployed with other technologies and some of these can now be more easily collaborated with and integrated with because they’re happening at the same events.” Sell echoed this. “We’re not moving away from OpenStack,” she told me. “This is not separate from OpenStack. All that we are doing is actually meant to make OpenStack better.”

Canonical founder Mark Shuttleworth has a slightly different opinion about the current direction of the Foundation, though. He seems to be worried that the focus on multiple projects will take away from the core OpenStack project. “OpenStack suffered when there wasn’t clarity about the mission,” he said. “And I hope that in broadening the scope of what they as the Foundation want to worry about won’t confuse people about OpenStack.” Today, he believes, OpenStack is in a good spot because it delivers on a specific set of promises. “I would really like to see the Foundation employ the key contributors to OpenStack so that the heart of OpenStack had long-term stability that wasn’t subject to a popularity contest every six months,” he added.

There can be no doubt that today, a number of companies are back to doubling down on OpenStack as their core focus. Mirantis, for example, was one of the earliest and best-known (and well-funded) backers of the project. A year ago, it looked like the company was going to switch its focus to application delivery and away from infrastructure. Today, it’s still building out that side of its business, but as co-founders Adrian Ionel and Boris Renski told me, the OpenStack side of its business is growing, with both new customers coming on board and existing customers like Adobe and Apple expanding their deployments. “Something changed,” Renski said. He attributed that change to the Foundation’s focus on edge computing. “I think as the edge is becoming more relevant and more of a real thing, it’s actually possible to productize OpenStack much better.”

Interestingly, much of the interest in OpenStack today is in China. The U.S. market for OpenStack, on the other hand, is still growing, but at a far slower pace. Collier attributed the success in China to the fact that the country has massive infrastructure needs and that it has embraced open source. “If you have those two criteria, you’re going to run OpenStack like crazy,” he said. “What else are you going to run?” He did add, though, that the Chinese government has also gotten involved in evangelizing technology standards and that many groups in the government have identified OpenStack as one of those.

While OpenStack feels like it’s back on track, though, this move to broaden the Foundation’s scope also sets it up for some competition with the Linux Foundation, and especially the Cloud Native Computing Foundation, which it manages. A number of pundits at the event were surprised that Ceph, an open-source storage service that’s at the core of many major OpenStack deployments, as well as container-based platforms, formed its own foundation under the Linux Foundation earlier this week.

Sell and Collier don’t see it that way, though. “I think it’s a strong statement that [the Ceph Foundation] had a really successful event here and made the announcement from here,” Sell said. “I think that shows that some of these foundation lines and where projects live is not as significant as you may think it is.”

“I think that what matters to us is that the people from the Ceph community feel welcome here,” Collier added. “And all we really want to do is have the people collaborate and have the technology work together. Where the actual pieces of software that are put together ultimately live in terms of foundations, it’s not that important.”

He also noted that he never saw a user who cared whether a project lived in the Apache Foundation or the Linux Foundation, for example. What matters instead is the community around a project. He did acknowledge, though, that the foundations that manage projects obviously matter — and that this is important enough that the OpenStack Foundation is changing to accommodate the projects that would be a good fit under its auspices.

No matter how this will eventually play out, though, it’s been interesting to watch the journey of OpenStack over the last few years. When I first really started paying attention to it, it was at the top of the hype cycle. Those parties were fun, but OpenStack hadn’t proven itself. Then, the approach of bringing in lots of projects started to muddle the project’s mission and, at the same time, the startup ecosystem began to flounder as enterprise adoption materialized at a glacial pace. Some companies, like Mirantis, had raised enough money to hang in there, though, and those are now finally reaping the rewards. What didn’t go away, though, was the community that built the project, and many of the large corporate backers. In part, that’s because more use cases for private data centers have emerged. Telcos, banks and others are betting on those — and there are simply no real alternatives to OpenStack for them.

Today, OpenStack itself is frankly boring. There are no major new features. Nobody is pumping a lot of money into the OpenStack ecosystem. But the enterprise world is just fine with that, because those companies aren’t betting their business on OpenStack because it’s the cool new thing. At the same time, the OpenStack Foundation is also reinventing itself to react to the needs of its community. And as new projects emerge, maybe the hype cycle will start over again.

Propel accelerates with $18M Series B to manage product lifecycle

We hear so much about managing the customer relationship, but companies have to manage the products they sell too. Propel, a Santa Clara startup, is taking a modern cloud approach to the problem, and today it landed an $18 million Series B investment.

The round was led by Norwest Venture Partners. Previous investors Cloud Apps Capital Partners, Salesforce Ventures and SignalFire also participated. Today’s investment brings the total raised to over $28 million.

“We are focused on helping companies design and launch products, based on how you go through the life cycle of a product from concept to design to make, model, sell, service where everybody in a company gets involved in product processes at different points in time,” company co-founder and CEO Ray Hein told TechCrunch.

Hein says the company has three core products to help customers track products through their lifecycle. First, there is the product lifecycle management (PLM) tool, used by engineering and manufacturing. Next comes product information management for sales and marketing, and finally a quality management component used by service personnel.

The company is built on top of the Salesforce platform, which could account for Salesforce Ventures’ interest in the startup. While Propel looks purely at the product, Salesforce is more interested in the customer, whether from a sales, service or marketing perspective.

These same employees need to understand the products they are developing and selling, and that is where Propel comes into play. For instance, when salespeople are filling out an order, they need access to the product catalog to get the right numbers, or marketing needs to understand the products it is adding to an online store in an e-commerce environment.

Traditional PLM tools from companies like SAP and Oracle are on-prem or have been converted from on-prem to cloud services. Propel was born in the cloud and Sean Jacobsohn, partner at Norwest Venture Partners, who will be joining the Propel board, sees this as a key differentiator for the startup.

“With Propel’s solution, companies can get up and running faster than with on-premise alternatives and pivot products in a matter of seconds based on real-time feedback gathered from marketing, engineering, sales, customers and the entire supply chain,” Jacobsohn said in a statement.

The company was founded in 2015. It currently has 35 employees, which Hein intends to boost to 50 in the coming months, flush with these new funds.
