STK X2, First Take: A good-looking budget smartphone

STK will likely be a new name to many people, but the UK company has a range of handsets in its portfolio. The X2 is STK’s flagship, although it costs just £169.99 (or £179.99 if you opt for the tempered glass variant or want a bundled 32GB MicroSD card). The company’s entry-level smartphone is the Evo, which currently costs £39.99, and you can get an STK feature phone for as little as £12.99.

The X2 is an unashamedly budget phone in terms of specifications and build materials (although the design is neat enough), with a UI overlay on top of Android 8.1. In the box you get a headset with a 3.5mm jack and a silicone bumper, and the handset comes with a three-year warranty. Another plus is the presence of an IR blaster which, coupled with software, can turn your phone into a remote control for TVs and other consumer electronics. There’s no IR software pre-installed here, but there are plenty of choices in the Play Store.


STK’s X2 is a 5.7-inch phone with an 18:9, 720×1440-pixel IPS screen.


Image: Sandra Vogel/ZDNet

My review sample had a glass back which, although not a particular fingerprint magnet, was very smooth. This made the handset slippery to hold, and it slid off my armchair more than once. A circular fingerprint sensor sits below the raised housing for the two camera lenses. There’s also a gold version of this phone, whose back looks even more mirror-like than the one I had. The phone is relatively thick, which actually makes it easier to hold with confidence one-handed.


The STK X2 comes in (shiny) gold as well as black.


Image: STK

STK is shy about some of the handset’s specifications. It doesn’t say that Gorilla Glass is used for the screen, which means it probably isn’t. Nor are the dimensions to be found on the product page online, though press materials put the thickness at 8.2mm, which seems right to me. I measured the phone at 152mm tall and 72mm wide.

The 5.7-inch screen is close to edge-to-edge on the long edges, but there are sizeable bezels at the top and bottom. The display resolution of 720 by 1,440 pixels (282ppi) is moderate, but the 18:9 aspect ratio is up to date. There’s no front-camera notch, but few will be bothered about that. The IPS screen is perfectly clear and sharp for most activities, including reading web pages.

The single speaker delivers rather tinny sound and isn’t really up to the job of accompanying TV catchup or music listening.


The X2’s MediaTek MT6750V chipset is supported by 4GB of RAM, a combo that delivered middling Geekbench 4 scores of 2623 (multi-core) and 661 (single-core). I saw similar benchmarks recently from the Palm Palm mini-handset and the Honor 7A. There was a notable wait while apps launched, and screen presses were sometimes a little slow to generate a response. I also had to wait a while for the fingerprint sensor to do its job, and in general things felt somewhat laggy.

The front camera is a 16MP unit, as is the main camera at the back. There’s a second rear camera whose specification is not given online, but which is a 0.3MP unit for depth sensing. The rear camera setup struggled a bit in low light conditions, and focusing in general was a bit hit-and-miss — sometimes the camera simply refused to focus on what I asked it to, instead making its own decisions about what should and should not be in focus. I found that moving away and returning to the subject tended to fix this problem.

There is 64GB of internal storage, of which 56.65GB was free out of the box. This can be boosted by putting a MicroSD card into one of the two SIM trays. I noted at the start of this review that Android 8.1 is supplemented by a UI overlay. This is very light touch: STK adds an FM radio into the mix, and a couple of other management apps, but interference with Android is minimal and there are no apps that duplicate Android functions to confuse you.

The 3000mAh battery is on the small side these days, and the X2’s battery life reflects this. The Geekbench battery test saw it last for 6 hours 38 minutes and gave it a mediocre rating of 2264. In the real world, a day’s use should be possible, but not if you want to make much use of battery-hungry apps like games or services like GPS. The battery is charged by Micro-USB rather than USB-C, and there’s no facility for fast charging or wireless charging.

I did have one major problem with this handset, and that was its desire to take a rest every once in a while, completely unbidden. It seemed to get over that after a few days’ use, and it’s entirely possible that a software update will fix it, but it’s unnerving to receive a phone for review with a self-reset issue.

STK has arguably committed a marketing blunder by saying that the original price of this handset was £299.99 (or £309.99 for the tempered glass/32GB MicroSD options). At that price I would have been unimpressed. It’s much more palatable at £169.99 or £179.99. Still, I’d suggest that anyone interested in a smartphone in this price bracket do a little shopping around: the Moto G6 and Nokia 6.1 are among the leading alternatives.

RECENT AND RELATED CONTENT

Smartphone market ‘a mess’ but annual tablet sales are also down
Apple tops the list for most shipped tablets in 2018, while Huawei increased its smartphone penetration during the year.  

Samsung mobile marks lowest profit since 2016 battery debacle
Samsung’s mobile business has posted an operating profit of 1.5 trillion won for the fourth quarter of 2018 — its lowest since the Galaxy Note 7 battery debacle in 2016.

Chinese handsets continue making inroads in Europe as mid to low end get squeezed
Samsung and Apple are doing well, as Sony, LG, and Wiko feel the pain.

Android Q: Rumors, features, and everything we know so far
We don’t have a name yet, but we do know quite a bit about the next version of Android.

Best Phones for 2019 (CNET)
These are the top-rated phones on CNET right now.

Read more reviews


What is 5G? All you need to know about the next generation of wireless technology

UPDATED: January 2019

Last September, consumers began to see the first service bundles offered by telecommunications companies in their area, marketed with some form of the term “5G.” “5G is here,” declared Verizon CEO Hans Vestberg, specifically for cities such as Sacramento, Los Angeles, and Indianapolis where rival AT&T had already been drumming up excitement around its 5G trials.

It was a bit like SpaceX’s 2016 announcement, its 2017 announcement, and its 2018 announcement that the race to Mars had begun.

Read also: T-Mobile and Sprint merger: The numbers and assumptions you need to know

5G Wireless is an explicit set of technologies specified by the 3rd Generation Partnership Project (3GPP) as “Release 15” and “Release 16.” 3GPP is an organization consisting of essentially all the world’s telecommunications standards bodies who agreed to share the definition of 3G Wireless, and to move on from there to successive generations.


Overlooked by London’s skyscrapers, EE’s 5G mobile trial kicks off.


Image: EE

Who gets to say what 5G really is and isn’t

Today, 3GPP specifies which technologies constitute 5G Wireless and, by exclusion, which do not. 5G is an effort to create a sustainable industry around the wireless consumption of data for all the world’s telcos. One key goal of 5G is to dramatically improve quality of service, and extend that quality over a broader geographic area, in order for the wireless industry to remain competitive against the onset of gigabit fiber service coupled with Wi-Fi.

The 5G transition plan, once complete, would constitute an overhaul of communications infrastructure unlike any other in history. Imagine if, at the close of the 19th century, the telegraph industry had come together in a joint decision to implement a staged transition to fax. That’s essentially the scale of the shift from 4G to 5G. The real reason for this shift is not so much to get faster as to make the wireless industry sustainable over the long term, as the 4G transmission scheme is approaching unsustainability faster than industry experts predicted.

To make the transition feasible in homes and businesses, telcos are looking to move customers into a 5G business track now, even before most true 5G services exist. More to the point, they’re laying the “foundations” for technology tracks that can more easily be upgraded to 5G, once those 5G services do become available.

New business models


Equipment staged by NTT DOCOMO for 5G urban area trials in Japan.


(Image: Ericsson)

The initial costs of these infrastructure improvements may be tremendous, and consumers have already demonstrated their intolerance for rate hikes. So to recover those costs, telcos will need to offer new classes of service to new customer segments, for which 5G has made provisions. These include:

  • Fixed wireless data connectivity in dense metropolitan areas, with gigabit-per-second or better bandwidth, through a dazzling, perhaps bewildering, new array of microwave relay antennas;
  • Edge computing services that bring computing power closer to the point where sensor data from remote, wireless devices would be collected, eliminating the latency incurred by public cloud-based applications;
  • Machine-to-machine communications services that could bring low-latency connectivity to devices such as self-driving cars and machine assembly robots;
  • Video delivery services that would compete directly against today’s multi-channel video program distributors (MVPD) such as Comcast and Charter Communications, perhaps offering new delivery media for Netflix, Amazon, and Hulu, or competing against them as well.

“It’s not only going to be we humans that are going to be consuming services,” remarked Nick Cadwgan, director of IP mobile networking, speaking with ZDNet. “There’s going to be an awful lot of software consuming services. If you look at this whole thing about massive machine-type communications [mMTC], in the past it’s been primarily the human either talking to a human or, when we have the internet, the human requesting services and experiences from software. Moving forward, we are going to have software as the requester, and that software is going to be talking to software. So the whole dynamic of what services we’re going to have to deliver through our networks, is going to change.”

Driving for higher yields

5G comprises several technology projects in both communications and data center architecture, all of which must collectively yield benefits for telcos as well as customers, for any of them to be individually considered successful. The majority of these efforts fall into one of three categories:

  • Spectral efficiency — Making more optimal use of multiple frequencies so that greater bandwidths may be extended across further distances from base stations (historically, the main goal of any wireless “G”);
  • Energy efficiency — Leveraging whatever technological gains there may be for both transmitters and servers, in order to drastically reduce cooling costs;
  • Utilization — To afford the tremendous communications infrastructure overhaul that 5G may require, telcos may need to create additional revenue-generating services such as edge computing and mobile apps hosting, placing them in direct competition with public cloud providers.

Service tiers


Projection of interrelated 5G service tiers


(Image: International Telecommunication Union)

It was during the implementation of 4G that telcos realized they wished they had different grades of infrastructure to support different classes of service. 5G allows for three service grades that may be tuned to the special requirements of their customers’ business models (a toy illustration follows this list):

  • Enhanced Mobile Broadband (eMBB) aims to service more densely populated metropolitan centers with downlink speeds approaching 1 Gbps (gigabits-per-second) indoors, and 300 Mbps (megabits-per-second) outdoors. It would accomplish this through the installation of extremely high-frequency millimeter-wave (mmWave) antennas throughout the landscape — on lampposts, the sides of buildings, the branches of trees, existing electrical towers, and in one novel use case proposed by AT&T, the tops of city buses. Since each of these antennas, in the metro use case, would cover an area probably no larger than a baseball diamond, hundreds, perhaps thousands, of them would be needed to thoroughly service any densely populated downtown area. And since most would not be omnidirectional — their maximum beam width would only be about 4 degrees — mmWave antennas would bounce signals off of each other’s mirrors, until they eventually reached their intended customer locations. For more suburban and rural areas, eMBB would seek to replace 4G’s current LTE system, with a new network of lower-power omnidirectional antennas providing 50 Mbps downlink service.
  • Massive Machine Type Communications (mMTC) [PDF] enables the machine-to-machine (M2M) and Internet of Things (IoT) applications that a new wave of wireless customers may come to expect from their network, without imposing burdens on the other classes of service. Experts in the M2M and logistics fields have been on record saying that 2G service was perfectly fine for the narrow service bands their signaling devices required, and that later generations actually degraded that service by introducing new sources of latency. MMTC would seek to restore that service level by implementing a compartmentalized service tier for devices needing downlink bandwidth as low as 100 Kbps (kilobits-per-second, right down there with telephone modems) but with latency kept low at around 10 milliseconds (ms).
  • Ultra Reliable and Low Latency Communications (URLLC) would address critical needs communications where bandwidth is not quite as important as speed — specifically, an end-to-end latency of 1 ms or less. This would be the tier that addresses the autonomous vehicle category, where decision time for reaction to a possible accident is almost non-existent.  URLLC could actually make 5G competitive with satellite, opening up the possibility — still in the discussion phase among the telcos — of 5G replacing GPS for geolocation.
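
To make those targets concrete, here is a minimal Python sketch that matches a device’s needs against the figures quoted above. The tier names are real, but the matching logic and the eMBB latency ceiling are illustrative assumptions on my part, not anything drawn from the 3GPP specifications.

    # Rough downlink floors (Mbps) and latency ceilings (ms) from the list
    # above; the 50 ms eMBB latency ceiling is an assumed placeholder.
    TIERS = {
        "eMBB":  {"downlink_mbps": 300.0, "latency_ms": 50.0},
        "mMTC":  {"downlink_mbps": 0.1,   "latency_ms": 10.0},  # ~100 Kbps, ~10 ms
        "URLLC": {"downlink_mbps": 0.1,   "latency_ms": 1.0},   # speed over bandwidth
    }

    def matching_tiers(needed_mbps, tolerable_latency_ms):
        """Return the tiers whose quoted targets would cover this device."""
        return [name for name, t in TIERS.items()
                if needed_mbps <= t["downlink_mbps"]
                and tolerable_latency_ms >= t["latency_ms"]]

    print(matching_tiers(0.1, 10))   # a factory sensor -> ['mMTC', 'URLLC']
    print(matching_tiers(300, 50))   # a metro smartphone -> ['eMBB']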

Plotting the “inflection point”

“The first generation of mobile systems that were launched around 1991 — popularly known as 2G/GSM — was really focused on massive mobile device communication,” explained Sree Koratala, head of technology and strategy for 5G Wireless in North America for communications equipment provider Ericsson, speaking with ZDNet. “Then the next generation of mobile networks, 3G, launched starting in 1998, enabled mobile broadband, feature phones, and browsing. When 4G networks were launched in 2008, smartphones popularized video consumption, and data traffic on mobile networks really exploded.

“All these networks primarily catered towards consumers,” Koratala continued. “Now when you look at this next generation of mobile networks, 5G, it is very unlike the previous generation of network. It’s truly an inflection point from the consumer to the industry.”

180430-01-5g-new-radio-timeline.jpg180430-01-5g-new-radio-timeline.jpg

(Image: 3GPP.org)

The full release of the first complete set of 5G standards (officially “Release 15”) by 3GPP took place in June 2018.  By the end of 2019, the organization expects to declare a supplemental set of 5G standards called “Release 16.” That release is slated to include specifications for:

  • Vehicle-to-Everything (V2X) communications, which would incorporate low-latency links between moving vehicles (especially those with autonomous driving systems) and cloud data centers, enabling much of the control and maintenance software for moving vehicles to operate from within stationary, staffed, and maintained data centers.
  • Satellite access, which may include the ability for satellite transmission to fill in gaps for under-served or geographically remote areas.
  • Wireline convergence, which would finally deliver the outcome that AT&T famously warned Congress was absolutely necessary for the communications industry to survive: the phasing out of separate wireline service infrastructure and the deconstruction of the old telephone lines and circuit-switched networks that were the backbone of the Bell System, and other state-sanctioned monopoly service providers of the 20th century.

Are “5G Evolution” and other intermediate steps necessary for 5G?

The true purpose of 5G Wireless, as you’ll see momentarily, is to produce a global business model where expenses are lower and revenue from services is higher, because 5G provisions for more, and greater, services than 4G could. So there is a valid argument, from a marketing standpoint, in favor of a gradual deconstruction of 4G branding. As consumers hear more and more about the onset of 5G, the numbering leaves them feeling more and more that their 4G equipment is old and obsolete.

With so many technologies under the 5G umbrella — home broadband, office broadband, home television, Internet of Things, in-vehicle communication, as well as mobile phone — there’s no guarantee that, when it comes time, any consumer will choose the same provider for each one unless that consumer is willing to sign a contract beforehand. That’s why telcos are stepping up their 5G branding efforts now, including rolling out preliminary 4G upgrades with 5G monikers, and re-introducing the whole idea of 5G to consumers as a fuzzy, cloudy, nebulous entity that encapsulates a sci-fi-like ideal of the future.

“The general purpose technology for the Fourth Industrial Revolution is actually the ambiguous sort of connectivity that 5G can bring,” admitted Verizon CEO Hans Vestberg, in no less conspicuous an arena than the keynote address of CES 2019.


Verizon CEO Hans Vestberg explains “5G for All” to attendees at CES 2019.


(Image: Verizon)

“So what is 5G? 5G is a promise,” Vestberg continued, “of so much more than we’ve ever seen in any wireless technology. From the beginning, we had the 1G, the 2G, the 3G, and the 4G. They were sort of leaps of differences, when it comes to speed and throughput. When we think about 5G, we think about 10 gigabits per second throughput, we talk about 10x battery life, we think about 1000 times more data volumes in the networks. It’s just radically different. I would say it’s a quantum leap compared to 4G.”

The first wave of 5G-branded services is effectively 4G, or 4G extensions, that place consumers on the right track for future 5G upgrades, thus guaranteeing the revenue sources that 5G will require to be successful, or if only to break even.

  • Verizon’s “First on 5G” began with the October 2018 rollout of what’s being called 5G Home — a broadband Wi-Fi service that bundles wireless phone with no-longer-cable TV service, for a price that, after short-term discounts, could rise to as much as $120/month. In the test cities where it was first deployed, 5G Home may utilize wireless spectrum that is indeed being earmarked for 5G. Yet it involved a grade of equipment only capable of 300 megabits-per-second (Mbps) throughput, which would eventually need to be upgraded to 1 gigabit-per-second (Gbps) for it to qualify as 5G technology. In January 2019, Verizon CEO Hans Vestberg indicated to financial analysts that the 5G Home rollout may remain limited to the initial test area for some time to come, as the company awaits new standards for customer premises equipment, probably as part of 3GPP’s Release 16. This came after it seemed clear to observers that Verizon was willing to continue rolling out intermediate equipment with a “5G” brand until that time.
  • AT&T’s “5G Evolution” began in December 2018 with the sudden, unanticipated appearance of a “5G E” icon in the notifications area of 4G customers’ phones. The icon appears if the phone is presently being serviced by a 4G LTE transmitter capable of being upgraded to 5G specifications. Those transmitters may have begun using frequencies over and above those originally reserved for 4G LTE, in addition to those already being used, for greater multiplexing and presumably greater bandwidth, although phones may not necessarily be equipped to receive these extra frequencies, even if they show the “5G E” icon.
  • AT&T’s “5G+” also began in December 2018, and refers to a mobile hotspot service that uses an early version (some would say “prototype”) of the very-high-speed mmWave technology that is being earmarked for 5G, in addition to existing 4G LTE. The hotspot device itself (Netgear’s Nighthawk 5G Mobile Hotspot) will be sold separately by AT&T for $499, while it offers the service for $70 per month for the first 15 GB. With a theoretical peak throughput of 300 Mbps, it’s conceivable that this device’s initial bandwidth allocation could be completely burned through in less than seven minutes’ time (see the quick arithmetic after this list).
  • T-Mobile has said it plans to launch what it characterizes as “true 5G service” to select cities, very soon after the finalization of its merger agreement with Sprint. In a statement, the company says it will need access to the mid-range of 5G spectrum currently delegated for Sprint, in addition to the low- and high-range spectrum T-Mobile currently holds, to deliver the first wave of its services. At the time of this writing, the ongoing federal government shutdown appears to have interfered with these plans.
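
The “seven minutes” figure above checks out with simple arithmetic, assuming decimal gigabytes and a sustained run at the full theoretical peak:

    # 15 GB allowance drained at the 300 Mbps theoretical peak.
    allowance_megabits = 15 * 8 * 1000    # 15 GB is roughly 120,000 megabits
    seconds = allowance_megabits / 300    # megabits / (megabits per second)
    print(seconds / 60)                   # ~6.7 minutes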

But consumers aren’t the only parties facing down the need to climb onto a new track. Telcos have their own service providers — for example, Nokia, which has absorbed the assets of the former Alcatel-Lucent, and is now the holder of the massive Bell Laboratories intellectual property portfolio. These providers also need their customers to get with the program. In a January 22 blog post, Nokia’s marketing strategist Clare McCarthy stated that communications service providers (CSPs) can start small if they want, but to become fully 5G compliant, they’ll eventually have to overhaul their communications infrastructure completely. McCarthy wrote:

CSPs can start small by upgrading their radio access software and maintaining the connection to an existing LTE core network if increased capacity is all they need, but capacity is not all CSPs need. They see their role as central to a flourishing digital economy. They need to offer new digital services and support new operating and business models across industry verticals — and this requires more than a radio-only upgrade. Delivering a new kind of business requires a network capable of higher speeds, greater spectral efficiency, a cloud native core and a coherent, end-to-end framework. CSPs need to deploy a fundamentally different infrastructure to meet the needs for greater capacity, latency and extreme reliability.

Why cooling made 5G an urgent necessity

In May 2017, AT&T President of Technology Operations Bill Hogg declared the existing wireless business model for cell tower rental, operation, and maintenance “unsustainable.” Some months earlier, a J. P. Morgan analyst characterized the then-current business model for wireless providers in Southeast Asia as unsustainable, warning that it had made it impossible for carriers to keep up with customer demand. And as research firm McKinsey & Company asserted in a January 2018 report, the growth path for Japan’s existing wireless infrastructure is becoming “unsustainable,” rendering 5G for that country “a necessity.”

One senses a theme.

The world’s telcos need a different, far less constrained, business model than what 4G has left them with. The only way they can accomplish this is with an infrastructure that generates radically lower costs than the current scenario, particularly for maintaining, and mainly cooling, their base station equipment.

Read also: Stingray spying: 5G will protect you against surveillance

Cooling and the costs associated with facilitating and managing cooling equipment, according to studies from analysts and telcos worldwide, account for more than half of telcos’ total expenses for operating their wireless networks. Global warming (which, from the perspective of meteorological instrumentation, is indisputable) is a direct contributor to compound annual increases in wireless network costs. Ironically, as this 2017 study by China’s National Science Foundation asserts, the act of cooling 4G LTE equipment alone may contribute as much as 2 percent to the entire global warming problem.

The world’s biggest example


China Mobile’s breakdown of its annual capital and operational expenditures for maintaining one 3G base station.


(Image: China Mobile)

The 2013 edition of a study by China Mobile, that country’s state-licensed service provider, examined the high costs of maintaining energy-inefficient equipment in its 3G wireless network, which happens to be the largest on the planet in both territory and customers served. For 2012, China Mobile estimated that its network consumed 14 billion kilowatt-hours (kWh) of electricity annually. As much as 46 percent of the electricity consumed by each base station, it estimated, was devoted to air conditioning.
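
Taken together, those two figures imply a staggering cooling bill. A rough sketch of the scale, with the caveat that it applies the per-station 46 percent share across the entire network, which is looser than what the study itself claims:

    # China Mobile's own numbers: 14 billion kWh per year, with up to 46
    # percent of each base station's draw going to air conditioning.
    annual_kwh = 14e9
    cooling_share = 0.46
    print(annual_kwh * cooling_share)   # ~6.4 billion kWh per year on cooling alone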

That study proposed a new method of constructing, deploying, and managing network base stations. Called Cloud architecture RAN (C-RAN), it’s a method of building, distributing, and maintaining transmitter antennas that history will record as having triggered the entire 5G movement.

Read also: Samsung and KDDI complete 5G trial in baseball stadium

One of the hallmarks of C-RAN cell site architecture is the total elimination of the on-site base band unit (BBU) processors, which were typically co-located with the site’s radio head. That functionality is instead virtualized and moved to a centralized cloud platform, in which multiple BBUs’ control systems share tenancy, in what’s called the baseband pool. The cloud data center is powered and cooled independently, and linked to each of the base stations by no more than 40km of fiber optic cable.


An Ericsson 5G transmitter used in NTT DOCOMO’s Japan trials.


(Image: Ericsson)

Moving BBU processing to the cloud eliminates an entire base transmission system (BTS) equipment room from the base station (BS). It also completely abolishes the principal source of heat generation inside the BS, making it feasible for much, if not all, of the remaining equipment to be cooled passively — literally, by exposure to the open air. The configuration of that equipment could then be optimized, like the 5G trial transmitter shown above, constructed by Ericsson for Japan’s NTT DOCOMO. The goal for this optimization is to reduce a single site’s power consumption by over 75 percent.

What’s more, it takes less money to rent the site for a smaller base station than for a large one. Granted, China may have a unique concept of the real estate market compared to other countries. Nevertheless, China Mobile’s figures show that rental fees with C-RAN were reduced by over 71 percent, contributing to a total operational expenditure (OpEx) reduction for the entire base station site of 53 percent.

Read also: T-Mobile and Sprint to merge, finally, strutting 5G clout

Keep in mind, though, that China Mobile’s figures pertained to deploying and maintaining 3G equipment, not 5G. But the new standards for transmission and network access, called 5G New Radio (5G NR), are being designed with C-RAN ideals in mind, so that the equipment never generates enough heat to cross the threshold at which OpEx effectively quadruples.

The new cloud at the new edge

It would appear a lot of the success of 5G rests upon this new class of cloud data centers, into which the functionality of today’s baseband units would move. As of now, there is still considerable uncertainty as to where this centralized RAN controller would reside. There are competing definitions.

Some have taken a good look at the emerging crop of edge data centers sprouting adjacent to today’s cell towers, and are suggesting that the new Service Oriented Core (SOC) could be distributed across those locations. Yet skeptics are wondering, why bother with the elimination of the BTS station in the first place, if the SOC would only put it back? Alternately, a separate SOC station could be established that services dozens of towers simultaneously. The problem there, obviously, is that such a station would be a full-fledged data center in itself, which would have real estate and cooling issues of its own.

Either option might be more palatable, some engineers believe, if the servers operating there could delegate computing infrastructure among internal operations and special customer services — edge computing services that could compete with cloud providers such as Amazon and Microsoft Azure, by leveraging much lower latency. The ability to do so is entirely dependent upon a concept called network slicing. This is the subdivision of physical infrastructure into virtual platforms, using a technique perfected by telecommunications companies called network functions virtualization (NFV).
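
As a mental model of what slicing means, consider this toy Python sketch. It is purely illustrative — real NFV orchestration involves far more than bandwidth accounting — but it captures the core idea: one physical pool, with each tenant perceiving only its own isolated allocation.

    class PhysicalPool:
        """One physical resource pool, carved into isolated virtual slices."""

        def __init__(self, capacity_gbps):
            self.capacity_gbps = capacity_gbps
            self.slices = {}   # slice name -> fraction of the pool

        def carve_slice(self, name, share):
            # Reserve a fraction of the pool as an isolated virtual slice.
            if sum(self.slices.values()) + share > 1.0:
                raise ValueError("pool is fully allocated")
            self.slices[name] = share
            return self.capacity_gbps * share   # capacity this tenant perceives

    pool = PhysicalPool(capacity_gbps=100)
    print(pool.carve_slice("eMBB", 0.6))    # 60.0 Gbps for broadband traffic
    print(pool.carve_slice("mMTC", 0.1))    # 10.0 Gbps for sensor traffic
    print(pool.carve_slice("URLLC", 0.2))   # 20.0 Gbps for low-latency control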

Also: Microsoft Azure: Everything you need to know about Redmond’s cloud service 

The dicey subject of slicing


One scenario mobile operators envision for 5G network slicing.


(Image: Next Generation Mobile Networks Alliance)

Exactly what routes these network slices would take through the infrastructure is completely up in the air. T-Mobile and others have suggested slices could divide classes of internal network functions — for instance, dividing eMBB from mMTC from URLLC. Others, such as the members of the Next Generation Mobile Networks Alliance (NGMN), suggest that slices could effectively partition networks in such a way (as suggested by the NGMN diagram above) that different classes of user equipment, utilizing their respective sets of radio access technologies (RAT), would perceive quite different infrastructure configurations, even though they’d be accessing resources from the same pools.

Another suggestion being made by some of the industry’s main customers, at 5G industry conferences, is that telcos offer the premium option of slicing their network by individual customer. This would give customers willing to invest heavily in edge computing services more direct access to the fiber optic fabric that supports the infrastructure, potentially giving a telco willing to provide such a service a competitive advantage over a colocation provider, even one with facilities adjacent to a “carrier hotel.”

Read also: Micro circuitry innovation needed to implement 5G

But depending upon whom one asks, slicing networks by customer may actually be impossible. There are diametrically split viewpoints on the subject of whether slicing could congregate telco functions and customer functions together on the same cloud. Some have suggested such a convergence is vitally necessary for 5G to fulfill the value proposition embodied in C-RAN. Architects of the cloud platforms seeking to play a central role in the SOC, such as OpenStack and CORD, argue that this convergence is already happening, and was the whole point of the architecture in the first place.

AT&T has gone so far as to suggest the argument is moot and the discussion is actually closed: Both classes of functions have already been physically separated, not virtually sliced, in the 5G specifications, its engineers assert. In a January 2019 statement, the company said it has already begun deployments of what it calls Multi-access Edge Compute (MEC) services with select customers, in some cases using existing 4G LTE connectivity. “The data that runs through AT&T MEC,” the statement reads in part, “can be routed to their cloud or stay within an enterprise’s private environment to help increase security.”

But AT&T isn’t the “Bell System” any more — it doesn’t get the final say. Thus one of the most critical decisions in 5G architecture may end up being the result of trial and error.

However this issue gets resolved, the very fact that slicing must take place somehow, if only to virtually separate those functions that will not have already been physically separated, suggests that 5G will not be “a fully meshed world of wirelessly connected everything.”  Security — the topic that always waits until the last moment — will ensure that certain things will remain strategically disconnected, for our own good.

The emergence of fixed wireless

Ericsson’s own forecasts of wireless connectivity have been known to fool people. In June 2017, its annual Mobility Report estimated that mobile data traffic would grow at an average compound annual growth rate of 42 percent through 2022, having grown eightfold by the end of that period. “By the end of the forecast period,” stated Ericsson, “more than 90 percent of mobile data traffic will come from smartphones.”
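
Those two numbers are consistent with one another, assuming six compounding years from the end of 2016 through the end of 2022:

    # 42 percent compound annual growth over six years.
    print(1.42 ** 6)   # ~8.2 -- roughly the eightfold growth the report cites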

That forecast generated a truckload of headlines. A half-billion 5G mobile subscriptions are expected worldwide by 2022, reported ZDNet’s Corinne Reichert. Ericsson’s updated report, published last November, doubled that forecast number for 2023, adding that 5G access would reach one-fifth of the world’s population by the end of that year.

Read also: Robots could get cheaper, thanks to 5G

The keyword in the above paragraphs is “mobile.” Up to now, all the “Gs” have pertained to the wireless access technology we’ve historically perceived as synonymous with mobility. For 5G to be truly successful, Ericsson’s Koratala told us, it will need to open up access to a broader range of devices, many of which are actually not the least bit mobile.


The not-so-mobile proposition

“These connections are expected to be going into devices in factories, transportation, and the grid,” said Koratala. “So the range of applications means a huge diversification of performance and requirements for communication. Then there are some use cases that might be demanding a 5x improvement in latency, a 100x or 1000x data volume, as well as [extending] battery life. So when you look at that set of requirements, it’s very clear that it is not a single use case. It really becomes an enabler for a wide variety of use cases, that will have different requirements to be met to make them viable.”

The key mission of mMTC is to service wireless devices that don’t move. Its transmission scheme will be tuned for very high density — for situations like factory floors where thousands of individual mechanical elements are sending operational data, simultaneously, to an off-site location for instant analytics.

Read also: UK government seeks city to showcase 5G connectivity

Viewed in this light, the prediction that nine-tenths of mobile data will be consumed by the largest class of mobile devices seems about as spot-on as a forecast that rain will continue to be wet. What is completely unpredictable at this point is whether a fixed wireless use case will be competitive in an environment where wired broadband is also undergoing a revolution.

Exchanging yesterday’s new technology for today’s

You will hear from many sources that 5G is not about what anything is, but rather what it enables you to do. No, it isn’t. 5G is about the things in which the telecom industry, and to a growing extent the data center networking industry, must invest in order to produce the latest editions of platforms such as V2X and mMTC, so that it can start earning revenue from those services. 5G is all about what it is.

If you end up watching smoothly streaming 4K video on a new class of smartphone, allowing yourself to be ferried between cities in an otherwise unoccupied vehicle, or participating in a virtual, real-time football tournament with a few dozen goggle-wearers scattered throughout the planet, then you will be fulfilling the hopes of the telco engineers working to make 5G viable. The truth is, none of these consumer technologies are the real reason 5G is being engineered. Indeed, they are the side benefits.

The big gamble


Three experimental AT&T cell tower designs for desert deployment. (Yes, they’re right in front of you.)


(Image: AT&T)

5G is a collective bargain between the telecommunications industry and society. To allow for anything close to evenly distributed coverage over a metropolitan area, the base stations containing the transmitters and receivers (the “cells”) must be smaller, much lower in power, and much greater in number than they are today. Essentially, the new cell towers must co-exist with the environment. An outdoor photograph taken in any direction will be just as likely to include a 5G tower as not. (The example above, provided by AT&T, includes three.)

Read also: How US carriers moved up the timeline on 5G

It would not be unprecedented in history. We’ve borne telephone and electric poles through our neighborhoods and, not all that long ago, willingly installed TV aerials the size of kites on our chimneys. Some of us still use their old mounting posts for our satellite dishes. In exchange for the hopefully minor blemish on our landscapes that 5G may bring, many would wave a cheerful good-bye to dead spots.

All these things must happen, and in relatively quick succession, in order for telcos to afford the infrastructural overhaul they now have no choice but to make.


Previous and related coverage

Qualcomm touts OEM, carrier wins for global 5G launches in 2019

The chip giant announced that its Snapdragon X50 modem chipset was chosen by 19 global operators for upcoming mobile 5G trials.

Sprint eyes mobile 5G network launch for first half of 2019

5G could lead to an increase in customers’ data plans, Sprint’s CEO said.

How 5G will impact the future of farming

ZDNet caught up with Julian Sanchez, director of John Deere Technology Innovation Center, during CES 2018 to talk about how rural connectivity will impact the future of precision agriculture.


Best-paying programming languages, skills: Here are the top earners

Wage growth for tech jobs in the US was stagnant in 2018, rising just 0.6 percent from 2017 to an average of $93,244 for the year, according to Dice’s 2019 tech salary report.


Average tech wages haven’t increased since 2015, when the average was actually higher than today at $93,328, according to Dice’s data, and that’s despite historically low levels of unemployment in the sector. 

However, there are a few specialized skills and roles that have seen higher than average growth, which could motivate some into making a career pivot. 

Dice’s survey of 10,780 technology professionals finds that 68 percent would jump ship to get a higher wage, compared with 47 percent who would do it for better working conditions, like remote work and more flexible hours.

As expected, the top-paying tech jobs are held by C-level execs and directors, whose average annual salary grew 3.9 percent over the year to $142,063. 

Salaries for software engineers grew 5.1 percent to $110,898, while technology strategist and architect wages grew 8 percent to $127,121. 

Database administrators on average received $103,473 per year but wages grew only 0.2 percent. Meanwhile, web developer and programmer salaries grew 11.6 percent to $82,765. Even technical-support wages saw decent growth of 6.8 percent to $60,600. 

App-developer wages grew 7.6 percent to $105,200. Other roles that paid between $100,000 and $115,000 include DevOps engineer, hardware engineer, project manager, and security analyst. 

Looking at the most lucrative skills, Dice finds that programmers using Google-developed Go, or Golang, earned the highest on average at $132,827, while programmers using Apache Kafka earned an average of $127,554. 

Besides Go, the top-paying languages, according to a list compiled by ZDNet sister site TechRepublic, are Perl, Shell, Node.js JavaScript, Java/J2EE, TypeScript, Python, Ruby, Swift, and C#. All commanded average wages of between $101,000 and $110,000. 

Skillsets where average annual wages exceeded $120,000 include Amazon DynamoDB, Amazon Redshift, Apache Cassandra, Elasticsearch, RabbitMQ, MapReduce, and SAP HANA.    

Some skills saw significant declines in average wages. The average wage for those skilled in the iOS graphic design app declined 12.1 percent to $107,061, while wages for those skilled in Rackspace technology slipped 7.1 percent to $104,782. 

Other broadly defined skills where average wages declined by more than five percent but still exceeded $100,000 include infrastructure as a service, Pure Storage, NetApp, Fortran, 3Par, software-defined networks, Informix, Siebel, unified communications, Compellent, Glassfish, Sun, Objective-C, and IBM’s Infosphere Data Stage. 

The top-paying location is Silicon Valley, where average wages for tech jobs rose 3.2 percent to $118,306. Other cities where average wages are between $105,000 and $100,000 include Seattle, San Diego, Minneapolis, Boston, Baltimore, Portland, and New York. 

However, the best cities, adjusted for the local cost of living, are Minneapolis, Portland, Tampa, Charlotte, and Seattle. 


Dice finds that programmers using Google-developed Go, or Golang, earned the most on average at $132,827.


Image: Dice

Previous and related coverage

Top programming languages to learn in 2019? Developers name their favorites

Software developers reveal which languages are their top priorities for 2019.

The programming languages and skills that pay the best in 2019 TechRepublic

The 10 programming languages associated with the highest-paying jobs all earned developers an average salary above $100,000.

Is Julia fastest-growing new programming language? Stats chart rapid rise in 2018

Company founded by Julia’s four creators issues figures to show how the open-source language gained momentum in 2018.

Programming language Julia is gaining on Python

A young programming language for machine learning is on the rise and could be soon gunning for Python.

Programming language of the year? Python is standout in latest rankings

Python consolidates its place as a long-term top-three programming language.

Python now a top-3 programming language as Julia’s rise speeds up

The MIT-created Julia programming language continues its ascent in developer popularity.

Which programming languages are most popular (and what does that even mean)?

Popularity may not be a single vector answer, but students and professionals still want to know if they’re guiding their careers and companies in the right direction.

Possible Python rival? Programming language Julia is winning over developers

A young programming language for machine learning is on the rise and could be soon gunning for Python.

Python’s rise: Could it soon edge out C++ in programming language popularity?

Python climbs up TIOBE’s search engine-based index of programming language popularity.

Microsoft readies Python, Java support for its bot-building framework

Microsoft may be ready to rev up (again) its conversation as a service strategy, with new additions to its bot-framework toolset.

Is Julia the next big programming language? MIT thinks so, as version 1.0 lands TechRepublic

Released in 2012, Julia is designed to combine the speed of C with the usability of Python, the dynamism of Ruby, the mathematical prowess of MatLab, and the statistical chops of R.

Mozilla’s radical open-source move helped rewrite rules of tech CNET

A gamble 20 years ago unleashed the source code for the browser that became Firefox. The approach is now core to Facebook, Google and everyone else.


ZaReason Gamerbox 9400: The ultimate Linux gaming PC


A few years back, Gabe Newell, Valve‘s CEO, said, “Linux is the future of gaming.” Well, that didn’t happen, but Valve, creator of the Steam gaming platform and network, is renewing its push for Linux games. So, it makes good sense that ZaReason, a Linux computer manufacturer, has released a top-of-the-line gaming PC: The ZaReason Gamerbox 9400.

Also: How Fortnite approaches analytics, cloud 

ZaReason CEO Cathy Malmrose said the Gamerbox 9400 was only the start. “Our current team is mostly gamers so, not surprisingly, that is the direction we are going. We have a full line of gaming machines in R&D,” Malmrose said.

The base Gamerbox runs Ubuntu 18.04 Linux. It comes with a 64-bit, 3.8GHz Pentium G5500 Coffee Lake processor and 8GB of DDR4 RAM. It’s built on top of the Gigabyte Z370P D3 motherboard.

This tower PC comes with a 120GB boot solid-state drive (SSD) and a 1TB, 7,200RPM hard drive. That’s more than fast enough for today’s games.


If you need more interior storage, ZaReason has you covered. The motherboard provides six 6Gbps SATA ports, three PCI-Express 3.0/2.0 slots, and a single NVMe M.2 slot.

For a net connection, it uses Gigabit Ethernet. The Gamerbox also comes with a pair of USB 2.0 ports, two USB 3.0 ports, and four USB 3.1 ports with a maximum data transfer rate of 10Gbps.

For graphics, the base-level Gamerbox 9400 has an NVIDIA GTX 1050. This budget Pascal-generation graphics card is still more than fast enough to run any game you’d care to play. With the new NVIDIA 418.30 driver, FreeSync is finally enabled on Linux. With it, the video card can sync with your DisplayPort monitor’s refresh rate. The result is a tear-free, gaming-focused display that’s finally supported on Linux.

This is plenty of machine at an affordable price of $799. But that’s not what I reviewed.

No, I got the “take no prisoners” ZaReason Gamerbox 9400.


This model came with a water-cooled 3.7GHz Intel i7-8700K, which I overclocked to 4.7GHz. Why overclock it? Well, wouldn’t you? I mean the water cooling was just sitting there begging to be used.

Intel, when the chip was first released, called it its best gaming chip ever. Now, that was a year and a half ago. Today, I’d say it’s a toss-up between the Intel Core i9-9900K and the AMD Ryzen Threadripper 2990WX. That said, the i7-8700K is still a great gaming chip. It’s got more than enough processing power for even the most demanding games.

The powerhouse processor is backed up by 64GB of DDR4 RAM. It also comes with lots of room for games on its 2TB, 7,200RPM hard drive.

Of course, the real gaming goodness comes from its NVIDIA RTX 2080 graphics card. There’s been a lot of talk about how it’s not as fast as it should be and it costs too much. All I can say is that with its Turing GPU architecture, 2,944 CUDA cores, a 256-bit memory interface, a 1,515MHz GPU base clock frequency, and 8GB of GDDR6 video memory, it was more than fast enough for me.

Using the Phoronix Test Suite and focusing on games, I found the amped-up Gamerbox averaged over 60fps on 4K displays in such games as Warhammer II, the Steam-based Rise of the Tomb Raider, and the WINE-based Dawn of War. Those are great numbers.
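
For anyone who wants to reproduce this kind of run, the Phoronix Test Suite is driven from the command line and is easy to script. A minimal sketch — the benchmark profile name below is a placeholder, so list the available profiles first and substitute one that matches your installed games:

    import subprocess

    # Show which test profiles are available on this machine.
    subprocess.run(["phoronix-test-suite", "list-available-tests"], check=True)

    # Run one benchmark; "pts/your-game-test" is a placeholder, not a real profile.
    subprocess.run(["phoronix-test-suite", "benchmark", "pts/your-game-test"], check=True)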

Just for fun, I really enjoyed playing Counter-Strike: Global Offensive, the classic zombie game Left 4 Dead 2, and BioShock: Infinite. I was also able to play the much-hated Fallout 76. Let me be honest, all the reviews you’ve read about Fallout 76 being a bad game? Well, they’re right. But I’m from West Virginia, where Fallout 76 is set, and for me, it was an entertaining visit to my post-apocalypse home state. If you really want to try Fallout 76 on Linux, check out Chris Titus Tech’s excellent YouTube guide and review of Fallout 76.

Now, I’m not a big gamer, but the maxed-out Gamerbox 9400 was the best gaming PC — Windows or Linux — I’ve ever used. For $2,205, it’s what any hardcore Linux gamer would want for their playing pleasure.

Related Stories:


Windows 10 updates are broken again, but this time it's not Microsoft's fault

Multiple Windows users in the US and UK have been unable to download updates from Windows Update. But rather than the problem lying at Microsoft’s end, the culprit appears to be various ISPs’ domain name system (DNS) settings. 


Users have also reported the same DNS issue preventing app updates from the Windows Store and breaking Microsoft’s security feature, SmartScreen. 

As spotted by Softpedia, Windows users pinpointed Comcast’s DNS settings as the source of the problem, and, oddly, found that switching the computer’s network settings to use Google’s Public DNS allowed Windows Update to resume.
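
One quick way to test whether your ISP’s resolver is the culprit is to resolve a Windows Update hostname against both your system default and Google Public DNS, then compare the results. Here is a sketch using the third-party dnspython package (pip install dnspython; resolve() needs version 2.x); the hostname below is one example of a Microsoft update domain, not an exhaustive list:

    import dns.resolver

    HOST = "download.windowsupdate.com"

    for label, servers in [("system default", None),
                           ("Google Public DNS", ["8.8.8.8", "8.8.4.4"])]:
        resolver = dns.resolver.Resolver()    # starts from the system's DNS config
        if servers:
            resolver.nameservers = servers    # override with Google's servers
        try:
            answers = resolver.resolve(HOST, "A")
            print(label, "->", [a.to_text() for a in answers])
        except Exception as exc:
            print(label, "-> FAILED:", exc)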

A user on Comcast’s Xfinity community forum said they were able to get Windows 10 updates after changing the Windows 10 device’s IPV4 and IPV6 DNS lookups from Comcast’s default DNS to Google Public DNS. 

Users there reported the Google Public DNS fix enabled Windows Store apps to update and restored Windows Defender SmartScreen. 

Affected people on a separate Xfinity post about the same issue reported that Comcast appeared to have fixed the issue on Thursday afternoon, although the company has not announced the DNS issue has been resolved.   

For unknown reasons, this issue wasn’t isolated to Comcast, with dozens of Windows 10 users in the UK reporting the same Windows 10 update problems on BT Broadband. 

SEE: 20 pro tips to make Windows 10 work the way you want (free PDF)

A BT forum moderator, Neil, acknowledged the complaints on Thursday evening UK time and said the ISP’s network team has been informed of the issue. The company has yet to report a resolution. 

Affected BT customers were still experiencing the same Windows Update problems on Friday morning. 

As noted by Softpedia’s write-up on the workaround, switching to Google’s public DNS servers should be considered a temporary fix until the problem is resolved by the ISP, and for that reason it would be wise to note down the original DNS settings for when that fix arrives. 

Windows 10 users can change the default DNS settings by going to Settings and selecting Network & Internet > Ethernet > Change adapter settings. This will display a field where the user can manually type in Google’s public DNS server IP addresses. 
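
The same change can be scripted with the built-in netsh tool from an administrator prompt — here wrapped in Python for consistency. This is a sketch: the adapter name “Ethernet” is an assumption (check yours with “netsh interface show interface”), and you can revert to your ISP’s servers later with “netsh interface ip set dns name=Ethernet dhcp”:

    import subprocess

    ADAPTER = "Ethernet"   # an assumption -- replace with your adapter's actual name

    # Point the primary DNS server at Google's 8.8.8.8...
    subprocess.run(["netsh", "interface", "ip", "set", "dns",
                    f"name={ADAPTER}", "static", "8.8.8.8"], check=True)
    # ...then add 8.8.4.4 as the secondary.
    subprocess.run(["netsh", "interface", "ip", "add", "dns",
                    f"name={ADAPTER}", "8.8.4.4", "index=2"], check=True)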

Previous and related coverage

Microsoft cloud services see global authentication outage

Office 365, Dynamics 365, Azure Government Cloud impacted by authentication issue.

Windows 10 19H1: We’ll fix confusing setup error messages, promises Microsoft

Would the average Windows user know what a KB is or what to do with one?

Windows 7 versus Windows 10: Here comes the final showdown

With less than a year to a major Windows 7 support deadline, it’s decision time for the PC.

Windows 10 19H1: Microsoft pushes its services with ‘Make Windows even better’ prompt 

Microsoft wants you to “make Windows even better” by setting up Microsoft Account services on Windows 10 devices.

Microsoft’s new Windows 10 19H1 test build is taxiing toward the finish line

Microsoft’s latest Windows 10 19H1 (1903) test build is light on new features, and heavy on fixes — which is expected as it rolls toward completion.

How to turn features on and off in Microsoft Windows 10 from the Control Panel TechRepublic 

Microsoft decided to conceal the traditional Control Panel, but you can still access it if you know how.

CES 2019: Everything we saw, from 8K TVs to amazing fake burgers CNET

The show opened with a bombshell from Apple, closed with a surprise from Samsung and had plenty of news in between.
