Since this is an OLED, you know the picture will be excellent. You get brighter, crisper colors, support for four HDR formats for deeper blacks and whites, and Dolby Atmos for 3D sound. You may actually stop going out to the movies; this TV is that good. There’s also the webOS smart system for all your streaming services.
The biggest addition comes in the form of the AI ThinQ, which LG has been integrating into more of its products. The idea is to turn your TV into the center of your smart home, connecting with other devices and controlling them through the TV. From checking the weather to adjusting the lights, just issue a voice command through the LG Magic Remote without having to pause Avengers: Infinity War. And since it works with Google Assistant and Amazon Alexa devices, you won’t get pigeon-holed into any one AI.
Between the beautiful picture and the smart home capabilities, this is definitely a TV for the modern home. So if this sounds like the perfect addition to your home theater setup, check out Walmart and save big.
You’ll find several different cuts here to fit your style, many in your choice of either soft and warm bamboo fiber, or light and airy micro modal. I own a few of both, and they’re absolutely terrific, especially at these prices. At this time of year, my pick of the bunch would be the micro modal boxer briefs, which are a steal at under $8 per pair.
We’ve seen lots of machine learning systems create strange new phrases and dreamlike images after being trained on large amounts of data. But a new website lets you do the generating, and the results are just as bizarre as you’d expect:
The web applet, built by researcher Cristóbal Valenzuela, is based on a new paper from another team of researchers. Their machine learning algorithm is called AttnGAN (Attentional Generative Adversarial Network). It’s meant to improve upon other text-to-image AI by refining images at the word level. For now, the results are closer to surrealist art:
Machine learning, as you probably know by now, is the process researchers use to train algorithms on large datasets, allowing them to solve complex problems like “what is this a picture of?” on their own. These algorithms can also do the opposite, creating new images out of words. The new paper explains that older text-to-image programs formed images using entire sentences, which wasn’t great. Their method instead creates a general image from the entire sentence, then refines the image using the sentence’s sub-parts.
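The coarse-then-refine idea can be sketched in a few lines. This is a toy illustration only, not the paper’s trained network: the embeddings are random vectors, and every function name here is hypothetical, but the flow mirrors the description above, with one pass conditioned on the whole sentence followed by per-word refinement.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(words):
    # Hypothetical word embeddings: one random 8-dim vector per word.
    return {w: rng.normal(size=8) for w in words}

def coarse_image(sentence_vec):
    # Stage 1: a low-resolution "image" conditioned on the whole sentence.
    return np.tanh(np.outer(sentence_vec, sentence_vec))[:4, :4]

def refine(image, word_vecs):
    # Stage 2: nudge regions of the image using each individual word,
    # standing in for AttnGAN's word-level attention.
    for vec in word_vecs.values():
        attn = np.tanh(vec[:4])          # toy per-row attention weights
        image = image + 0.1 * attn[:, None]
    return image

words = "a red bird".split()
vecs = embed(words)
sentence = sum(vecs.values()) / len(vecs)   # crude sentence embedding
img = refine(coarse_image(sentence), vecs)
print(img.shape)  # (4, 4)
```

The point of the two stages is the same as in the paper: the sentence-level pass fixes the overall layout, and the word-level passes sharpen individual regions.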
The researchers trained the network on the COCO, or Common Objects in Context dataset. It’s a good reference source for images of common objects, like stop signs, animals, and… Modest Mouse lyrics.
Valenzuela’s tool excelled at creating the stuff of fever dreams in response to Gizmodo staffers’ twisted requests. Our own Hudson Hongo got especially good at getting the images he wanted.
Unsurprisingly, Janelle Shane’s AI Weirdness blog is where we found out about AttnGAN, so we asked her what it says about the current state of AI.
“This demo is a really interesting way of showing how much a state of the art image recognition algorithm understands about image and text,” she told Gizmodo. “What does it understand about what ‘dog’ means? Or ‘human?’” But she noticed that structure is difficult for these algorithms. “If it sees a human arm pointing toward it vs to the side, it looks really different in a 2D image.”
Shane also pointed out that the algorithm drew birds really well when it only needed to draw birds, but things got worse as more became expected of it—the version of AttnGAN on Valenzuela’s site tries to draw whatever a user types in. She compared it to self-driving cars, which have many more tasks to perform and many more obstacles to recognize.
Gizmodo reached out to the study’s first author, Lehigh University Ph.D. student Tao Xu, and will update the post when we hear back.
But please, have fun with this one and show us your worst in the comments.
As a final thought, these would make really good Dixit cards.
After the Associated Press reported that certain Google apps still track you even if you turned off location history, Google has changed its help pages and tried to clarify the issue. “We have been updating the explanatory language about Location History to make it more consistent and clear across our platforms and help centers,” Google told the AP in a statement.
The AP’s investigation found that with Location History off, Google still stores your coordinates when you open Maps or even do searches, even if they’re not related to where you are. After the report first surfaced, Google effectively denied there was a problem, saying “we provide clear descriptions of these tools.”
Google has now removed the misleading language on the Location History help page. It used to state that “With Location History off, the places you go are no longer stored.” Now, it says:
This setting does not affect other location services on your device, like Google Location Services and Find My Device. Some location data may be saved as part of your activity on other services, like Search and Maps.
That doesn’t solve the ongoing challenge of disabling location services in other apps. It’s a convoluted process, as you need to log into Google, head to your Google account and then select “Manage Your Google Activity.” You then hit “Go To Activity Controls,” and flip the toggle under “Web and App Activity” to pause it.
The GDPR and other laws are designed to make it easy for consumers to understand when they’re being tracked and to easily opt out. The initial Associated Press report also caught the attention of US lawmakers. Senator Mark Warner told the AP that it’s “frustratingly common” for tech giants to “diverge wildly from the totally reasonable expectations of their users.”
Profit is Google’s main incentive to keep user location data, as it helps advertisers better target consumers. The EU Commission recently fined Google €4.34 billion for breaching EU antitrust rules, and potential GDPR fines can amount to 4 percent of a company’s yearly turnover. Following the report, US Rep. Frank Pallone called for “comprehensive consumer privacy and data security legislation” in the US.
As it gears up to move into a new home (a Galaxy Home, to be specific), Bixby is far from ready. Samsung’s digital assistant has become infamous for its tardiness, and even after showing up late to the AI party, Bixby doesn’t have much to show for the extra time. It’s not smarter than the rest and doesn’t offer any new tricks, even in the recently announced Galaxy Home, other than perhaps better sound quality. As much as I’m excited about Samsung potentially giving Amazon, Google and Apple some competition in the smart speaker space, I’m pretty sure they have nothing to worry about, if my time with Bixby on the Note 9 is any indication.
To be clear, Samsung still hasn’t launched the Bixby-powered Galaxy Home speaker, and no one seems to have published an in-depth hands-on with it. The AI I tested on the Note 9 was pre-release software and might still be improved by the time the company’s developer conference rolls around in November, by which time Samsung says it will have more to share. That’s just under three months away — not a lot of time to fix such a broad array of issues. And based on precedent, Bixby’s problems can’t be solved with even a year of work, let alone a few months. It’s as if Samsung put the pie in the oven without turning it on and keeps taking it out to see if it’s cooked. It’s not.
On the surface, Bixby seems to function fine. It can pull up nearby restaurants via Yelp, tell me the day’s weather forecast, get directions to my destinations and play music from Spotify, for example. If you ask it a follow-up question, the assistant understands the context and gives you answers without you having to repeat earlier parameters, making the experience feel more conversational. Its Bixby Vision feature even uses the camera to help you interpret things in the world around you. But its results fall short of the competition.
When I asked Siri, Google Assistant and Bixby for directions to the High Line in Manhattan, both Apple and Google gave me results via their respective maps. Bixby gave me Uber travel times instead, repeatedly asking me what type of Uber I wanted. That’s alright if you already know you want to take a car, but sometimes a girl’s gotta save money and take the train. In this case Bixby’s results only enable bad habits.
When I asked the three assistants to “text Chris Velazco,” the iPhone and Pixel were able to connect me to the right Chris, but Bixby wanted to tell one of the five Chrises in my phone the message “Velazco.” It’s little things like this that make Bixby feel clunky and unhelpful.
These experiences were sometimes made less frustrating thanks to visual cues on the phone, like on-screen answers and suggested follow-up questions. But the upcoming Galaxy Home is a speaker and doesn’t have a display.
In an audio-only format, it’s extremely important for an assistant to consistently understand what you’re saying. But from my experience, Bixby doesn’t truly hear me. I asked Bixby and Google Assistant to remind me to “take a photo of receipt tonight.” Google interpreted what I said correctly, but Bixby thought I said “take a photo of her seat tonight.” I don’t even know what that means.
While Bixby Vision is indeed useful at times, it’s not a feature you can use in a speaker. It’s been two years since Samsung first launched the assistant on the Galaxy S8 and S8 Plus, and it doesn’t appear to have made many meaningful improvements — not enough to convince me that Bixby can effectively underpin a smart speaker, anyway. Samsung has a lot of work to do before it should even think about launching the Galaxy Home this year.
OnePlus may get a boost in the US market with its next phone. CNET reports that according to people familiar with the matter, the OnePlus 6T will be backed by a major US carrier — T-Mobile. While the standard version of the new model will be able to run on AT&T’s and T-Mobile’s networks, as has been the case with previous models, OnePlus will also release a version that’s optimized for T-Mobile. CNET is also reporting an October launch and a $550 price tag, though it notes the price has yet to be finalized.
While OnePlus phones and their lower price points have attracted a following, actually having a carrier partner could garner the company a wider user base in the US. “Getting carrier shelf space is a prerequisite to volume sales in the US,” Avi Greengart, an analyst at Global Data, told CNET.
One of CNET’s sources did note that OnePlus is still in the midst of getting approval from the carrier, meaning a launch with T-Mobile isn’t set in stone just yet.
The Next Web has an interesting piece talking about what Jeff Powers refers to as ‘Class 2 smarthomes.’
With today’s tech – Class 1 – we do have things like timed automations, but a lot of the time we’re controlling things manually. Class 2 smarthomes would, he argues, be truly smart, and figure out a lot more things on their own.
Some of what he proposes would be pretty complex, but there’s one idea in there which Apple could fairly easily implement, and which would make HomeKit a lot friendlier for non-techies …
My desire to power up a laptop with an external graphics card began in 2015, when I set out on a quest to get back into PC gaming—a beloved pastime I’d neglected since childhood.
But the only PC I had at the time was a 2011 Lenovo ThinkPad X220 laptop with Intel HD 3000 integrated graphics. That just won’t cut it for proper PC gaming. Sure, the laptop would work well enough for older titles like Diablo III, especially on the laptop’s tiny 1366×768-resolution display, but forget about more graphics-intensive modern games on an external 1080p monitor. That’s why I decided to examine external graphics card (eGPU) setups.
And indeed, I found entire communities of people creating DIY setups that connected desktop graphics cards to their laptops via ExpressCard or mPCIe slots to play games on an external monitor. It isn’t hard to configure, and using desktop graphics cards with a laptop has become even easier in recent times. The wide availability of Thunderbolt 3 combined with external graphics card docks has simplified the process even more for people with a more modern notebook.
Many do-it-yourselfers using Thunderbolt 3 or going the ExpressCard/mPCIe route end up with a plug-and-play experience requiring little to no modification—though it takes some research first. When it’s done, however, you’ll be left with a console-toppling PC gaming setup for about the same price as a new Xbox One S, depending on which graphics card you choose. That’s far cheaper than building a whole new gaming desktop, and you can still take advantage of your laptop’s portability by disconnecting the eGPU hardware.
We’ll walk you through the DIY process for configuring an external graphics card later in this article, along with the sudden rise of streaming PC games from the cloud. First, let’s tackle the modern approach of using a graphics card dock via Thunderbolt 3.
Thunderbolt 3 graphics card docks
Thunderbolt 3 (TB3) is Intel’s high-speed external input/output connection, capable of speeds up to a blistering 40 gigabits per second (Gbps) over a compatible USB-C port. For resource-intensive activities like gaming, a speedy connection between your laptop and an external graphics card provides a big boost for performance.
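For a rough sense of scale, here’s a quick back-of-the-envelope conversion. It assumes, as is typical for these docks, that the graphics card rides over up to four tunneled PCIe 3.0 lanes; the exact lane count can vary by laptop.

```python
# Thunderbolt 3's headline rate is 40 gigabits per second.
tb3_gbps = 40
tb3_gb_per_s = tb3_gbps / 8          # /8 bits per byte -> 5.0 GB/s raw

# A PCIe 3.0 lane carries roughly 1 GB/s after encoding overhead,
# so four tunneled lanes land in the same ballpark.
pcie3_x4_gb_per_s = 4 * 1.0

print(tb3_gb_per_s, pcie3_x4_gb_per_s)  # 5.0 4.0
```

In other words, the link is fast enough that a desktop card in an external dock loses only a modest amount of bandwidth versus sitting in a desktop’s PCIe slot.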
Previous attempts at external graphics card docks existed, but they were usually overpriced and relied on proprietary connection technologies. Thunderbolt 3 levels the playing field, and several companies now offer TB3-based graphics card docks, complete with dedicated power supplies, additional ports, and—of course—room to slot desktop graphics cards.
All is not perfect in the world of Thunderbolt 3-powered graphics, however. Enclosures are, for the most part, still a pricey proposition—much more so than the DIY method we’ll outline later. You’ll also need a relatively new notebook equipped with a Thunderbolt 3-compatible USB-C port. These days most Thunderbolt 3 laptops and graphics card enclosures play nicely together thanks to Intel’s Thunderbolt 3 external graphics compatibility technology, which PC makers must specifically enable.
If you’re in the market for a new laptop compatible with an external graphics card dock, some good choices at this writing include the HP Spectre x360 and the latest Dell XPS 13. Still, Nando, an eGPU expert who is an administrator at eGPU.io, recommends researching your desired laptop model for compatibility with graphics card enclosures before buying just to be sure.
Once you’ve got your laptop sorted out it’s time to decide which graphics card dock to buy. We can’t cover all possible enclosures here, as virtually every major PC graphics card vendor is rolling out a graphics dock of its own, but we’ll look at some of the major products introduced in recent months.
Razer Core and Core X
When we looked at the original Razer Core, it was the first major TB3 enclosure to make a splash, ostensibly designed for Razer Blade laptops but able to work with any compatible TB3 system. It was also priced at a whopping $500. Since then, the Razer Core has split into two different models: the Core V2 and the Core X.
The Core V2 costs the same $500 as the original. That’s still far more than most other external graphics docks, but this luxurious model sports four USB 3.0 ports for gaming peripherals, a 500-watt internal power supply, and ethernet. It’s also nice to look at with a CNC-machined aluminum exterior and Razer’s Chroma RGB lighting system.
The Core X has a less costly $300 price tag and a 650W power supply, but it lacks USB and ethernet ports. It also lacks the custom-built finery of the Core V2, since it relies almost entirely on off-the-shelf components. The Core X fits larger cards up to three slots wide. We haven’t reviewed it, but Macworld loved it.
PowerColor Gaming Station
PowerColor’s Thunderbolt 3-based Devil Box was a similarly fancy box that sold for $450 in the early days of external graphics docks. It’s still listed on PowerColor’s site, but it isn’t easy to find. PowerColor’s preferred enclosure is the simply named Gaming Station ($300 on Newegg). The newer box is rocking a 550 watt power supply, ethernet, and five USB 3.0 ports.
Akitio Node, Node Lite, and Node Pro
Akitio has gone all-in on external graphics card docks by offering not one but three models: the Node, Node Lite, and Node Pro. A key difference between most of Akitio’s products and the other graphics card enclosures we’ve seen is that, with the exception of the original Node, Akitio’s are not certified by Intel as external graphics (eGFX) peripherals. Instead, they’re general purpose PCIe boxes. We won’t get into the distinction here, but you can read about it on Intel’s Thunderbolt blog.
The original Node packs a 400W power supply and costs $260 on Amazon. It doesn’t offer any extra ports for connecting peripherals, but the enclosure’s lower-priced sibling does. The Node Lite is currently priced around $200 on Amazon. It’s a general-purpose PCIe box with a DisplayPort connection and an extra Thunderbolt 3 port for peripherals, but you’ll need to bring your own power supply. Both docks support half-length, full-height, and double-width cards.
Finally, the Akitio Node Pro (currently priced at $340 on Amazon) also has a DisplayPort input and a second Thunderbolt port, a beefier 500W power supply, and a handy retractable lunch box handle if you want to take your graphics dock on the road.
Meal delivery service DoorDash raised $250 million in a new round of funding co-led by Coatue Management and DST Global, valuing the company at $4 billion.
DoorDash, whose investors include Japanese holding conglomerate SoftBank, Sequoia Capital and Charles River Ventures, was founded in 2013 by Stanford students Andy Fang, Stanley Tang, Tony Xu and Evan Moore.
The San Francisco-based company operates alongside GrubHub, Delivery.com, Postmates, Uber Eats and several startups in a highly competitive food delivery retail space that attempts to lure customers with discounts and other promotions.
Analysts are optimistic over Nvidia’s growth due to a new product cycle even as a key market falters.
Nvidia shares are down 2.9 percent Friday, a day after it reported better-than-expected fiscal second-quarter earnings Thursday. The company gave sales guidance slightly lower than the Wall Street consensus for the fiscal third quarter and warned about future cryptocurrency-mining revenue.
“Whereas we had previously anticipated cryptocurrency to be meaningful for the year, we are now projecting no contributions going forward,” Nvidia chief financial officer Colette Kress said in a release Thursday.
Cryptocurrency miners use graphics cards based on AMD’s and Nvidia’s chips to “mine” new coins, which can then be sold or held for future appreciation. The price of ethereum is down nearly 60 percent this year, crimping demand for cryptomining cards.
Bank of America Merrill Lynch reiterated its buy rating for Nvidia’s stock, expressing confidence in the company’s product pipeline.
“While the [guidance] headline miss is likely to pressure the stock [near-term], we reiterate Buy since we believe Q3 contains very limited benefits from NVDA’s next gen Turing architecture, which is likely to show up starting in Q4 and into 2019,” analyst Vivek Arya said in a note to clients Thursday. “We believe Turing and its ray-tracing capabilities and 10x inferencing benefits will have a pervasive impact across segments, and stimulate new markets in pro visualization.”
The analyst reaffirmed his $340 price target for Nvidia, representing 32 percent upside to Thursday’s close.
In similar fashion, Jefferies is confident about Nvidia’s new eighth-generation Turing graphics architecture, which was announced Monday at a conference in Vancouver.
“We are buyers in front of NVDA’s next gen GPU platform, ‘Turing,’ which launches in 3Q,” analyst Mark Lipacis said in a note Friday. “We expect Turing will be a revenue tailwind for NVDA over the next 12-18 months.”
Lipacis noted that Nvidia’s gaming business sales historically rose by 40 percent to 50 percent in the year after it launched major new chip technology. He reiterated his buy rating and $320 price target for Nvidia shares.
Cowen told its clients to overlook any short-term transitional hiccups heading into the Turing gaming card launch. Analysts expect gaming cards based on the new graphics technology to be announced shortly.
“The transitional quarter we expected appears a bit sharper than feared, with a zero-ing of crypto and gaming channel management impacting guidance as tables are set for the Turing gaming launch,” analyst Matthew Ramsay said in a note to clients Friday. “Our run-rate earnings power and thesis are 100% unchanged. With expectations reset, we see a very favorable set up and would be buyers.”
Ramsay reaffirmed his outperform rating and lowered his price target to $320 from $325 for Nvidia shares.
Nvidia shares are up 33 percent so far this year through Thursday versus the S&P 500’s 6 percent return.