WayRay, a Zurich-based AR company that makes holograms to augment driving, just closed an $80M Series C round. The round was led by Porsche and included Hyundai and Alibaba Group.
That brings the seven-year-old company’s total funding to over $100M on a $500M valuation. The company is actually calling its shot with a projected $1B valuation by 2019.
WayRay’s technology is the closest melding of video games and on-road driving I’ve seen. If that’s a frightening prospect, you can take some comfort in the fact that safety is one of its primary pitches.
In essence, WayRay’s projector, which is smaller than most aftermarket head-up display (HUD) units, turns the entire windshield into an AR hologram. Standard instrument information, such as MPG and speed, is of course displayed, but the system also reads roads and highlights lanes with Tron-like lighting, projects navigation maps and instructions, and issues warnings about potential hazards, such as pedestrians.
It’s a philosophy of technology integration in an AR ecosystem often focused on paradigm-shifting releases. Much of the lackluster adoption of AR/VR to date can be attributed to the way technology developers have tried to pitch novel systems. Adoption is likelier to happen as mixed reality technologies integrate into existing consumer experiences.
In fact, the thing that’s so surprising about WayRay’s tech, and likely the reason it’s getting the vote of confidence from legacy automotive players, is that it integrates seamlessly with the existing driving experience. The AR display coalesces a number of technologies already found in the cockpit of modern cars without introducing new screens or UX demands.
There’s virtually no learning curve, to pun badly.
Porsche previously teamed up with WayRay for Startup Autobahn, a showcase of European automotive technology, where WayRay took Grand Prize for top automotive startup. WayRay plans to start a pilot production line in Germany.
Hyundai seems as interested in the future applications of WayRay’s hologram AR as in its driving applications.
“The Hyundai-WayRay collaboration will help us establish a brand new eco-system that harnesses AR technology to enhance not only navigation systems but establish an AR platform for smart cities and smart buildings, which are Hyundai Motor Group’s new business interests,” said Dr. Youngcho Chi, CIO and EVP of Hyundai Motor Group.
Though focused initially on automotive applications, WayRay has plans to expand into sectors like construction and home electronics.
HANGZHOU, CHINA–Alibaba Group has formally established a semiconductor business to produce its own artificial intelligence (AI) as well as unveiled plans to develop quantum processors.
Driven by the Chinese vendor’s research and development (R&D) arm Damo Academy, these efforts would see the launch of Alibaba’s first in-house developed AI chip in the second half of next year. Called AliNPU, the new AI chip could support technologies used in autonomous driving, smart cities, and smart logistics, Alibaba said at its annual flagship computing conference held here on Tuesday.
It also set up a new semiconductor subsidiary, called Pingtouge, which it said would focus on customised AI chips and embedded processors. These efforts would support Alibaba’s plans to expand its cloud and Internet of Things (IoT) businesses as well as drive the development of industry-specific applications, it said.
Alibaba in April acquired integrated circuit design vendor Hangzhou C-Sky Microsystems, describing the move as “an important step” in boosting its chip-making capabilities. The move also would marry both companies’ R&D strengths and was in line with China’s push for the country to become self-reliant in the development of key technologies.
At the time, it revealed initial plans for AliNPU, which it said would be designed to process AI tasks such as image and video analysis.
Alibaba CTO Jeff Zhang said at the conference: “Moving ahead, we are confident our advantages in algorithm, data intelligence, computing power, and domain knowledge on the back of Alibaba’s diverse ecosystem will put us at a unique position to lead real technology breakthroughs in disruptive areas, such as quantum and chip technology.”
In laying out its five-year roadmap for Damo Academy, Alibaba said the R&D arm also would be developing “high-precision, multiple-qubit superconducting quantum processors” and would continue to push development in the sector. These would include quantum-classical systems to offer utility-based quantum compute power that could be delivered over the cloud.
Zhang added that the development of both software and hardware was necessary to provide the computing power needed to analyse data more quickly and at lower cost.
Launched last year, Damo Academy currently has more than 300 researchers across eight cities worldwide, focusing on five technology areas including fintech, robotics, and quantum computing. Its partners include Nanyang Technological University of Singapore and Stanford University.
Google also offers its own AI chip, first announced in 2016 and currently in its third generation. The Tensor Processing Unit (TPU) 3.0 is touted to be eight-times more powerful than its predecessor.
Based in Singapore, Eileen Yu reported for ZDNet from The Computing Conference 2018 in Hangzhou, China, on the invitation of Alibaba Group.
Now, following this Monday’s release of iOS 12, Google has updated its Maps app for iOS to version 5.0, so that iPhone owners can use it on infotainment systems.
Since CarPlay’s release in 2014, the only navigation option on the platform has been Apple’s own Maps app. Finally, four years on, iPhone owners can choose to use Google Maps, which many of them prefer.
To get the new Google Maps on an iPhone, you’ll need to install iOS 12 and then update the Google Maps app.
Although Apple’s CarPlay page also lists Waze as an additional navigation option, the Google-owned app hasn’t yet been updated with CarPlay support. However, Waze last week invited beta users to test its CarPlay-enabled app. Baidu Maps is also expected to come to CarPlay.
Google Maps on CarPlay includes the same features as the mobile app, including search, seeing alternative routes, live updates about traffic jams and delays, and estimated time of arrival information.
Google Maps will also let iPhone owners start navigating from the phone. Then, once connected to CarPlay, it will pick up where it left off.
There’s also the option to download maps in preparation for travel in areas where you could expect to be offline.
Finally, users can access saved lists from Google Maps on CarPlay, and the app features real-time traffic updates for those who use it for the commute between home and work.
iPhone users can now use Google Maps to navigate in their car’s built-in display. Source: Google
The US State Department has confirmed a data breach which has led to the exposure of employee data.
As reported by Politico, the personally identifiable information (PII) of some of the State Department’s workforce has been exposed; however, the data breach is not thought to affect more than one percent of the staff roster.
“We have determined that certain employees’ personally identifiable information (PII) may have been exposed,” an alert states, dated September 7. “We have notified those employees.”
The security notice was marked “Sensitive but Unclassified.” No technical details of the security incident have been released to the public, nor who may be responsible.
According to the department, the impacted email system is considered unclassified, and there is no evidence to suggest other, classified email networks have also been compromised.
The State Department says it is currently investigating the incident and is “working with partner agencies to conduct a full assessment” of the data breach.
“Like any large organization with a global presence, we are a constant target for cyberattacks,” the State Department said. “This is a good opportunity to remind everyone that we all play an important role in protecting Department information, especially when it comes to the use of secure and safe passwords, and reporting suspicious activity.”
Indeed it is, but it was only last week that the department was heavily criticized for poor security practices.
In a letter sent to Secretary of State Mike Pompeo, five US senators demanded to know why so few basic security measures were in place to secure the department’s systems, such as the use of multi-factor authentication (MFA). A report published by the General Services Administration (GSA) suggested that only 11 percent of “high-value” devices used by the department had MFA enabled.
The State Department says that steps “have been taken” to secure systems and employees involved in the data breach will be given three years of free credit monitoring.
The exposure of sensitive information belonging to federal employees is appalling but does not come close to the 2015 Office of Personnel Management (OPM) data breach, in which close to 22 million employee records were exposed in two separate attacks.
“We are working with the interagency, as well as the private sector service provider, to conduct a full assessment,” a State Department official told the Washington Examiner. “The Department is always actively engaged in identifying cybersecurity threats and protecting its networks. This is an ongoing investigation. We have no additional information to share at this time.”
If you own a stylus- or touchscreen-capable Windows PC, there’s a high chance a file on your computer has been slowly collecting sensitive data for the past months or even years.
This file is named WaitList.dat, and according to Digital Forensics and Incident Response (DFIR) expert Barnaby Skeggs, this file is only found on touchscreen-capable Windows PCs where the user has enabled the handwriting recognition feature [1, 2] that automatically translates stylus/touchscreen scribbles into formatted text.
The handwriting-to-formatted-text conversion feature was added in Windows 8, which means the WaitList.dat file has been around for years.
The role of this file is to store text that helps Windows improve its handwriting recognition feature, so it can recognize and suggest corrections for the words a user types more often than others.
“In my testing, population of WaitList.dat commences after you begin using handwriting gestures,” Skeggs told ZDNet in an interview. “This ‘flicks the switch’ (registry key) to turn the text harvester functionality (which generates WaitList.dat) on.”
“Once it is on, text from every document and email which is indexed by the Windows Search Indexer service is stored in WaitList.dat. Not just the files interacted via the touchscreen writing feature,” Skeggs says.
Since the Windows Search Indexer service powers the system-wide Windows Search functionality, data from all text-based files found on a computer, such as emails or Office documents, is gathered inside the WaitList.dat file. This includes not just metadata, but the documents’ actual text.
“The user doesn’t even have to open the file/email, so long as there is a copy of the file on disk, and the file’s format is supported by the Microsoft Search Indexer service,” Skeggs told ZDNet.
“On my PC, and in my many test cases, WaitList.dat contained a text extract of every document or email file on the system, even if the source file had since been deleted,” the researcher added.
Furthermore, Skeggs says WaitList.dat can be used to recover text from deleted documents.
“If the source file is deleted, the index remains in WaitList.dat, preserving a text index of the file,” he says. This provides crucial forensic evidence for analysts like Skeggs that a file and its content had once existed on a PC.
The technique and the existence of this file have been one of the best-kept secrets in the world of DFIR and infosec experts. Skeggs wrote a blog post about the WaitList.dat file back in 2016, but his discovery got little coverage, mostly because his initial analysis focused on the DFIR aspect and not on the privacy concerns that may arise from this file’s existence on a computer.
But last month, Skeggs tweeted about an interesting scenario: if an attacker has access to a system, or has infected it with malware, and needs to collect passwords that haven’t been stored inside browser databases or password manager vaults, WaitList.dat provides an alternative way of recovering a large number of passwords in one fell swoop.
Skeggs says that instead of searching the entire disk for documents that may contain passwords, an attacker or malware strain can easily grab the WaitList.dat and search for passwords using simple PowerShell commands.
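To illustrate the idea, here is a minimal, hypothetical sketch in Python (rather than the PowerShell commands Skeggs describes). WaitList.dat’s internal format is proprietary, so this naive pass simply extracts runs of printable ASCII and filters them for password-like keywords; the file path, minimum string length, and keyword list are all assumptions for illustration.

```python
import re

def extract_strings(data: bytes, min_len: int = 6) -> list:
    """Return every run of at least min_len printable ASCII characters."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

def find_candidate_passwords(strings, keywords=("password", "passwd", "pwd")):
    """Keep only strings that mention a password-related keyword."""
    return [s for s in strings if any(k in s.lower() for k in keywords)]

# In a real triage pass, `data` would be read from the WaitList.dat file
# itself, e.g. data = open("WaitList.dat", "rb").read()
sample = b"\x00\x12mail draft: my password is hunter2\x00\x07meeting notes for Tuesday\x00"
hits = find_candidate_passwords(extract_strings(sample))
print(hits)  # -> ['mail draft: my password is hunter2']
```

The point of the example is the asymmetry Skeggs describes: instead of parsing every document format on disk, an attacker only has to string-search one central file.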
Skeggs has not contacted Microsoft about his findings, as he, himself, recognized that this was a part of an intended functionality in the Windows OS, and not a vulnerability.
The file poses no danger unless users enable the handwriting recognition feature, and even then, only if a threat actor compromises the user’s system, either through malware or via physical access.
While this may not be an actual security issue, users focused on their data privacy should be aware that by using the handwriting recognition feature, they may be inadvertently creating a giant database of all the text-based files found on their systems in one central location.
According to Skeggs, the default location of this file is at:
Not all users may be storing passwords in emails or text-based files on their PCs, but those who do are advised to delete the file or disable the “Personalised Handwriting Recognition” feature in their operating system’s settings panel.
Back in 2016, Skeggs also released two apps [1, 2] for analyzing and extracting details about the text harvested in WaitList.dat files.
AMD doesn’t have a fantastic record when it comes to server processors, but its new EPYC products are finding favour among customers looking for more ‘bang for the buck’ — especially when it comes to HPC (High Performance Computing) and, interestingly, storage applications. What customers want, customers get, so despite an initial lack of enthusiasm from vendors used to simply following Intel, we’re seeing a growing number of new EPYC-powered servers being released. Before we look in detail at some of these, however, let’s summarise the advantages for buyers of AMD EPYC compared to Intel Xeon servers.
A lot of column inches have been devoted to the differences between the two architectures, but the headline is core count, with the Xeon Scalable Processor Family topping out at 28 cores per socket while AMD’s EPYC 7000 Series processors can have up to 32.
EPYC processors also benefit from eight memory channels and support for up to 2TB of memory per socket compared to 6 channels and 1.5TB of RAM with Xeon. AMD also wins out big-time on I/O, with support for 128 PCIe lanes per socket whereas Xeons support just 48. Intel processors, on the other hand, have more cache, plus support for the latest 512-bit AVX instruction set — although code has to be rewritten to exploit these extensions, which are of most interest to developers of high-end HPC applications.
There has also been much debate over real-world performance differences and cost differentials, plus there’s also the little matter of brand loyalty. Still, AMD is causing a real stir in the server market and it didn’t take much effort to find three new products to examine — two from SuperMicro specialist Boston Limited and the third from Dell-EMC.
The first of the Boston servers is a new take on its existing SuperMicro-based Quattro, effectively a 2U mini blade platform designed to accommodate four server sleds through slots at the rear, each holding an independent motherboard with two AMD EPYC processors on board. A redundant pair of 1100W slimline power supplies keep the new Quattro running, while at the front there are 24 low-profile (2.5-inch) storage bays organised into four sets to give each server six of its own for direct attached storage.
Any of the EPYC 7000 series processors can be specified, and because the servers are independent you can fit different processors in each sled to suit the expected workloads. The chips themselves are then covered by specially designed heatsinks to ensure good airflow in the limited space available, with shared fans in the main chassis to push the air around. That said, it’s worth noting that, with the trend towards ever higher processor TDPs (up to 180W on the 32-core EPYCs), cooling needs to be carefully planned. As such, you’re advised to have no more than four Quattro chassis in a rack without additional cooling measures.
In terms of memory there’s a set of 16 DIMM slots arranged either side of the processor sockets, enabling each server to have up to 2TB of RAM using ECC DDR4 modules clocked at 2,666MHz. Advice here, however, is to fully populate with DIMMs regardless of capacity in order to maximise performance. That’s due to the way memory is accessed and shared between the four CPU modules that comprise the EPYC processor.
On the I/O front, an integrated 12Gbps SAS controller is used to connect the processors to the storage at the front of the chassis, with the drive bays physically connected to each server by a shared backplane. The drives themselves can be either 2.5-inch SAS SSDs or, for maximum performance, four NVMe and two SAS devices. A separate all-NVMe configuration is also available, and each node additionally has an on-board M.2 interface to take an NVMe storage card, which customers routinely specify to boot the servers.
A dedicated Ethernet interface is built onto the motherboard for use by the integrated IPMI remote management controller, while server networking is handled by a proprietary SIOM (Super I/O Module). This fits into a custom PCIe connector located at the rear of the motherboard.
A number of SIOMs are available to order, providing Ethernet, InfiniBand and Omni-Path connectivity at up to 100Gbps, all with pass-through support for IPMI remote management. The network ports are at the back, and with four sleds crammed into just 2U the wiring can get messy. Still, the end result is network connectivity that leaves the two PCIe x16 slots free for other purposes.
Who’s it for?
Capable of hosting up to 256 processing cores in just 2U of rack space, the Quattro will be of interest to both HPC customers and buyers building large-scale VM farms. Storage is limited, but the ability to plug and play servers makes it easy to scale, with customers able to start with just one or two server sleds and add more when demand rises. That capability makes the Quattro a good choice for buyers seeking a hyperconverged infrastructure (HCI) platform. Indeed, market leader Nutanix used something very similar to power its appliances when it started out, and with EPYC processors on-board the concept is even more compelling.
The second Boston product is a 1U single-socket server which, at first glance, appears to be an entry-level or small business solution. However, that’s far from the case thanks to its AMD EPYC processor, 2TB memory capacity and support for up to 10 super-fast NVMe drives. Boston positions its new server as a powerful and cost-effective alternative to more expensive 2P Xeon platforms. Crucially, VMware vSphere and vSAN are both licensed on a per-socket basis, enabling customers to save thousands in licensing costs by moving from 2P Xeon to this kind of single-socket EPYC server while also boosting performance.
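The per-socket licensing point is simple arithmetic: the licence bill scales with total socket count, so consolidating from dual-socket to single-socket hosts halves that line item. The sketch below uses a hypothetical per-socket price (the $4,000 figure is an assumption for illustration, not an actual VMware list price):

```python
def licence_cost(servers: int, sockets_per_server: int, price_per_socket: float) -> float:
    """Per-socket licensing: total cost scales with the socket count."""
    return servers * sockets_per_server * price_per_socket

ASSUMED_PRICE = 4000.0  # hypothetical per-socket licence cost, illustration only

# Ten dual-socket Xeon hosts vs ten single-socket EPYC hosts
two_p_xeon = licence_cost(servers=10, sockets_per_server=2, price_per_socket=ASSUMED_PRICE)
one_p_epyc = licence_cost(servers=10, sockets_per_server=1, price_per_socket=ASSUMED_PRICE)

print(f"2P Xeon licences: ${two_p_xeon:,.0f}")                 # $80,000
print(f"1P EPYC licences: ${one_p_epyc:,.0f}")                 # $40,000
print(f"Licence saving:   ${two_p_xeon - one_p_epyc:,.0f}")    # $40,000
```

Whether the single-socket EPYC box actually matches a 2P Xeon on performance depends on the workload, but the licensing saving itself follows directly from the socket count.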
We looked at a pre-production model of the 1U server which, just like the Quattro, is based on a Supermicro motherboard — this time with just the one socket to take any of the EPYC 7000 Series processors, including 32-core variants.
Resplendent beneath heatsink and plastic ducting, the processor is sandwiched between sixteen DIMM slots capable of taking a full 2TB of ECC DDR4 2,666MHz memory — the same as on the 2P motherboards used in the Quattro.
Redundant 750W power supplies and Integrated IPMI remote management with a dedicated Ethernet interface come as standard, with two additional 10GBase-T ports for wider network connectivity. The server also has two full-height PCIe x16 expansion slots and a further low-profile socket, although it’s on the storage side that things start to get really interesting as it’s all NVMe and pretty impressive for a server of this size.
To start with, there are two M.2 connectors on the motherboard to take NVMe adapters to boot the Boston server and provide limited system storage. Beyond that, however, the 1U chassis has an impressive set of ten 2.5-inch drive bays arranged across the front and, similarly, cabled for use with NVMe solid state drives — although four can be used to accommodate SAS/SATA devices if needed.
For networking there are two 10Gbps Ethernet ports managed by an integrated Broadcom controller, plus a separate Gigabit port for IPMI remote management. That leaves the three PCIe x16 slots for further expansion.
Who’s it for?
The 1U Boston 1113S-WN10RTA is a completely different breed of server from the Quattro. At one level it will appeal to budget-conscious companies seeking to reduce their virtualisation costs by switching from 2P Xeon to 1P EPYC servers. On another, enterprise customers wanting a high-performance storage platform may be interested. Indeed, with that in mind, Boston is talking to partners about developing highly scalable SDS (Software Defined Storage) solutions that combine the 1U server with NVMe over fabric technologies.
Finally, there’s the Dell EMC PowerEdge R7415 which, like the 1U Boston, is a single-socket server capable of accommodating any EPYC 7000 Series processor and pairing it with up to 2TB of memory. Physically, however, it’s a much larger 2U system capable of accommodating up to 24 2.5-inch hot-swap SATA/SAS or NVMe drives. Dell EMC markets it both as a standalone server and as a validated vSAN node ready to exploit the licence savings made possible by having only one CPU.
A highly configurable platform, the review server was fitted with a 32-core EPYC 7551P processor with just eight of the available DIMM slots filled using 32GB DDR4 RDIMMs, adding up to 256GB altogether. These are located in the middle of the chassis with an impressive heatsink on top of the AMD chip to keep it cool, with the usual arrangement of memory slots on either side.
Network connectivity is handled through a LAN on Motherboard (LOM) arrangement with two Gigabit ports built in as standard, plus an optional mezzanine card which, on the system we looked at, added two more 10GbE ports. You also get the usual embedded iDRAC remote management and lifecycle controller, plus lots of fans to maintain an even temperature and a redundant pair of 800W power supplies to keep the server running. There’s even space to accommodate up to four PCIe expansion cards but, as with the Boston 1U server, it’s the storage options that will be of most interest to buyers of the AMD-powered PowerEdge.
Those options start with a choice between a chassis with just 12 3.5-inch SATA/SAS drive bays at the front (plus an optional extra two at the rear) or enough bays to take 24 low-profile (2.5-inch) devices using a mix of SATA/SAS and NVMe technologies. The bays are all hot-pluggable, supported by a fixed backplane that, on the review system, was split so that half the bays were cabled for pure SAS/SATA and the other half for the full mix of SATA/SAS and direct-connect NVMe. The bays on the review system were only sparsely populated, with a pair of 1.6TB NVMe U.2 drives in bays on the right side of the chassis and five 400GB 12Gbps SAS SSDs at the other end in the slots without NVMe support.
The NVMe drives are, of course, connected to the processors by the PCIe bus, while the SAS SSDs were cabled into a PERC H740p RAID controller located in a custom socket on the motherboard.
Along with the cables needed for NVMe, there’s a lot of wiring in a very small space, although the end result is surprisingly tidy and workmanlike. It’s also a very scalable storage setup, which is why it’s being offered as a preconfigured vSAN node.
Who’s it for?
According to the Dell EMC website, the PowerEdge R7415 is optimised for virtualisation and business analytics as well as scale-out, high-capacity SDS — much like the 1U Boston server. With its greater storage capacity and management options, however, the R7415 is clearly targeted at a more demanding enterprise demographic with bigger budgets, able to use the extra capacity to support big data, hybrid cloud and other storage-hungry applications.
EPYC: The bottom line
So there we have it: three very different servers designed to address the needs of distinct market segments, but all looking to do so by taking full advantage of the extra cores, memory channels and PCIe lanes provided by the AMD EPYC processor.
As well as performance benefits compared to Xeon-based alternatives, cost savings are also possible — although cheaper processors are only a small part of that equation. In fact, there’s a much greater benefit to be had from the ability to do more with less: to reduce server spend and also to save on licensing by switching from 2P Xeon to single-socket EPYC platforms. Because server configurations vary, those benefits are hard to quantify, but there are definite savings to be had — and a growing number of buyers are prepared to go for EPYC over Xeon in order to realise them.
Telstra’s next 5G launch will be based around smart cities, Telstra ED of Network and Infrastructure Engineering Channa Seneviratne has revealed, with the telco choosing each of its 200 initial launch sites for showcasing different use cases.
“We will launch another regional centre which I can’t name yet where we’re going to do smart cities,” Seneviratne told media during Telstra Vantage 2018 in Melbourne on Wednesday.
Seneviratne said the 5G launch in Toowoomba was based on a partnership with FKG Group, which had opened a Tier III datacentre in the region earlier this year, and on enabling agricultural technology applications.
“Within 100km of Toowoomba, you’ve got every single different type of agriculture … Toowoomba is becoming a centre where there’s advanced agri-tech being developed,” he said.
“They’re creating an advanced industrial precinct for high-tech agri-tech, so for us to provide them with 5G coverage is a really important thing to enable this next wave of industrial development.”
“We’ve got a very clear plan and roadmap in progress for where that deployment is going to happen, and we’ve got all of our partners lined up to support that rollout,” Telstra CEO Andy Penn told ZDNet.
“We need the handset and the device manufacturers to start building equipment at scale now. Initially by having the 5G network ready, what that enables us to do is to trial and test the early versions of the handsets and dongles and mobile hotspots and tablets, and the manufacturers come through and test them outside of the lab environment and in a commercial environment.”
“It’s not just the industrial sector that are using these types of technologies; we’re also working with customers across the board in other areas, and financial technology is one of those, and financial services,” Denholm said.
“If you look at what we’re doing with Commonwealth Bank today … we’re actually working with them today on what 5G can enable in their mobile banking of the future.”
Penn added on Wednesday that Telstra is “absolutely leading the world in 5G”, and has “always punched above its weight” on the international stage.
According to Penn, Telstra has a very good relationship with US carrier Verizon and its CEO Hans Vestberg, telling ZDNet that Telstra is looking to offer similar fixed-wireless products as Verizon 5G Home, which was announced last week at Mobile World Congress Americas (MWCA) in Los Angeles.
“We already provide a fixed-wireless option for customers that want to have a fixed-wireless option,” Penn told ZDNet.
“We currently with 4G customers have a home service, they can use it at home as well … there will be those offerings under 5G.
“Fixed-wireless will be an option for us, it will be a use case in the future, but our 5G strategies are much broader than that.”
Disclosure: Corinne Reichert travelled to Telstra Vantage 2018 in Melbourne as a guest of Telstra
PDF annotation app Flexcil will launch an Android version next year, backed by its strong fan base in the iOS sphere and rising demand for education-focused apps, its co-founder says.
“Some of our users are asking our launch schedule to decide whether to buy the iPad or an Android device,” said Park Ji-hoon, co-founder and COO of Flexcil, in an interview with ZDNet. “I think we have succeeded in building a dedicated fan base and want to meet this demand with expansion to Android as soon as possible.”
Flexcil, which aims to become an education solution for academics and students, launched in April 2017. It allows PDF files to be uploaded via the app. Users can scribble notes, highlight text using a touch pen or their fingers, and drag and drop text to a separate notepad. They can also use gestures as shortcuts instead of clicking multiple buttons. The aim was to bring out the best of both analogue and digital education settings.
Since then, the app has secured 200,000 users. As of August 18, it had 60,000 monthly active users, and it is used for 60 minutes a day on average. The app comes in a free version as well as an $8.99 paid version. This year it has added templates and backup features, as well as a ruler interface and the ability to draw a straight line.
The biggest coup for the firm followed an offer from Apple to have Flexcil installed on iPads on display in over 500 Apple Stores in 24 countries, which took effect in late March. The app was also featured in the App Store’s promotional ‘Apps We Love’ category in China in July.
Stylus use a booster
To challenge the iPad’s dominance, companies like Samsung Electronics and Microsoft are launching tablets as professional tools for content creation, often with a stylus in tow. Samsung recently launched the Galaxy Tab S4 armed with its S Pen, while Apple offers the Apple Pencil for iPad users.
“Tablets have up until now failed to find a clear differentiator between smartphones and laptops, so it has been more of an ‘ok’ device for content consumption,” said Park. “So Apple, Samsung and Microsoft are actively expanding the application of stylus and positioning their tablets as contents creation tools. This will expand the education market, lower its entry barrier and help create an ecosystem. This is a big opportunity for Flexcil,” said Park.
As tablet prices drop further and the devices are deployed more in the education sector, the COO said Flexcil has a ready-made app that can meet the rise in demand.
The company is also working with GreenBulb, a start-up that makes a price-competitive stylus called SonarPen. The stylus was a hit on Kickstarter and can be used with the iPad via a cable that connects to the earphone jack. Flexcil is preparing a UX catered to SonarPen, Park said.
“Cross platform is becoming a must so that is our long term goal. When we take a look at the way students interact with devices, they use typing for fast memo in laptops, scribble on note pads and search for compact information on mobile devices,” said Park. “We want to evolve to a service that can support students’ individual preferences for devices and study patterns.”
The company is preparing to launch an iPhone version with cloud sync support within the year.
“There are new education services, technology and SDK being introduced. Our plan is to connect our own services and partners through the cloud,” said the COO. The company was hopeful that the expansion will allow Flexcil to clinch B2B clients, especially schools.
“Teachers and IT managers at K-12 schools in the US have made inquiries,” he said. “This is an area we are interested in.”
“We will continue to improve Flexcil so that students can use it easily, and if demand rises, we are also considering a lite version.”
Telstra has announced that it is partnering with “major water utilities” on its Digital Water Metering Internet of Things (IoT) solution, which it said will prevent water wastage and reduce water consumption by providing insights to better manage usage.
This will reduce operating costs for water utilities and decrease water bills for consumers, Telstra said, with the telco having trialled its water IoT solution with multiple companies for the past year.
“Whether it’s leakage prevention with smart water meters or environmental monitoring to keep our oceans clean and estuaries safe from contaminants, water utilities across the country are using IoT technology to better track, monitor, and conserve water,” the company said at the Telstra Vantage 2018 conference in Melbourne on Wednesday.
Telstra’s narrowband-IoT (NB-IoT) network now covers more than 3.5 million square kilometres, it said, with its Digital Water Metering solution operating across this to provide battery life of 12 to 15 years.
Telstra had launched its NB-IoT network in January during CES 2018, with current COO and incoming CFO Robyn Denholm at the time saying the NB-IoT network will provide connectivity for IoT devices with smaller packets of data being sent, such as sensors in the mining, agricultural, transport, logistics, manufacturing, and industrial IoT industries.
“We’re using both protocols,” Denholm told ZDNet in January.
“The two cover sort of different use cases; the Cat-M1 is more devices that are on the move that need hundreds of kilobits per second, whereas the NB-IoT are very small packets of information.”
Across its IoT suite, Telstra in August also unveiled its new set of IoT tracking solutions, with a consumer-focused Telstra Locator product and an enterprise-focused Track and Monitor solution.
The telco said Telstra Locator, which will launch as a subscription-based service for post-paid customers later this year, will help customers find lost valuables.
Three locator tags are launching for this option: A Bluetooth tag for small items like keys and purses; a rechargeable Wi-Fi tag with four to six weeks of battery life using the more than 1 million Telstra Air hotspots for items such as pets, bikes, and bags; and a “premium” LTE tag utilising the telco’s Cat M1 IoT network for higher-value assets for small businesses, which will launch early next year.
Customers can then use the Telstra Locator App to find their tagged items, an app which head of Innovation and Strategy for Consumer and Small Business Michele Garra said was developed in-house.
Track and Monitor, meanwhile, will launch in October and allow businesses to trace and manage assets. While Telstra wouldn’t be drawn on the pricing, it did say the solution would “enable low-cost, large-volume asset tracking, whether across multiple warehouses or retail sites or while in transit”.
Disclosure: Corinne Reichert travelled to Telstra Vantage 2018 in Melbourne as a guest of Telstra
IBM said it will launch cloud software designed to manage artificial intelligence deployments, detect bias in models, mitigate its impact, and monitor decisions across multiple frameworks.
The move by IBM highlights how AI management is becoming more of an issue as companies deploy machine learning and various models to make decisions. Executives are likely to have trouble understanding models and the data science under the hood.
IBM said its technology will monitor AI so enterprises comply with regulations. In addition, IBM’s software works with models built on machine learning frameworks such as Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML.
Meanwhile, IBM said it will open source IBM Research’s bias detection tools via what it calls its AI Fairness 360 toolkit. The toolkit will provide a library of novel algorithms, code and tutorials. The hope is that academics, researchers and data scientists will integrate bias detection into their models. IBM’s AI bias detection tools are on Github.
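To make "bias detection" concrete, here is a minimal, self-contained sketch of one fairness metric of the kind toolkits like AI Fairness 360 provide: the disparate impact ratio, which compares favorable-outcome rates between an unprivileged and a privileged group. The data, group labels, and 0.8 threshold below are illustrative assumptions, not taken from IBM's toolkit.

```python
# Sketch of the disparate impact ratio, a common bias-detection metric:
# the rate of favorable outcomes in the unprivileged group divided by the
# rate in the privileged group. Values far below 1.0 suggest the model
# favors one group; ~0.8 is a widely cited rule-of-thumb cutoff.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable-outcome rates between the two groups."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # unprivileged: 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # privileged: 70% approved

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 3))  # 0.429 -- well below the 0.8 rule of thumb
```

A production toolkit wraps metrics like this with dataset abstractions and mitigation algorithms (reweighing, adversarial debiasing, and so on); the arithmetic at the core is this simple.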
Ritika Gunnar, vice president of IBM Watson Data and AI, said in an interview that the lack of trust and transparency with AI models are holding back enterprise deployments at scale and in production. Simply put, models are still on the shelf due to concerns about how the real-time decision making can harm a business. “It’s a real problem and trust is one of the most important things preventing AI at scale in production environments,” she said.
Strategically, IBM’s move makes sense. IBM is hoping to provide Watson AI, but also to manage AI and machine learning deployments overall. It’s just a matter of time before AI management becomes a buzzword among technology vendors. IBM said it is planning to provide explanations that show how factors were weighted, along with the confidence in recommendations, accuracy, performance, fairness, and lineage of AI systems.
IBM said it will also offer services for enterprises looking to better manage AI and avoid black box thinking.
Big Blue’s research unit recently penned a white paper outlining its take on AI bias and how to prevent it. IBM’s Institute for Business Value found that 82 percent of enterprises are considering AI deployments, but 60 percent fear liability issues.
Gunnar noted that AI bias goes well beyond factors such as gender and race. One scenario of AI bias could revolve around an insurance claims process, where an adjuster decides whether to approve or reject a claim. Items such as how long a policy has been held, the value of a vehicle, age, and zip code could play into what Gunnar called “non-societal bias.”
IBM’s Fairness 360 open source tools include tutorials on AI bias covering credit scoring, medical expenses, and gender bias in facial images.