LightBlog

Tuesday, December 17, 2019

OnePlus will show off a concept product at CES 2020

OnePlus is gearing up for an eventful 2020. Everyone, of course, expects the Chinese company to launch the next OnePlus flagship pair, and we have already seen leaks of the OnePlus 8 and OnePlus 8 Pro, giving us a fair idea of what to expect. What really surprised everyone was the renders of the OnePlus 8 Lite, a phone that is rumored to be OnePlus’ first mid-range smartphone in more than four years. OnePlus has also been teasing everyone with its presence at CES 2020, and now, the company is teasing the OnePlus Concept One.

OnePlus’ CEO first put out a teaser a few days ago, but other than the company’s presence at CES 2020, the teaser did not reveal much. Now, OnePlus’ official Weibo has revealed the name “OnePlus Concept One”.

OnePlus Concept One

The first guess for the teaser would be a concept smartphone, owing to the tribute the name pays to the OnePlus One, the company’s first smartphone. But the teaser does not actually mention “phone” anywhere, so our guess remains a guess at best. The Chinese text within the teaser translates to “Variable Design, Variable Future”, which is still very vague and does not shed any light on the exact specifics of the announcement.

We’ll have to wait for CES 2020 to know exactly what OnePlus has planned and what the OnePlus Concept One is. There’s also a possibility that we’ll get some more teasers in the run-up to the announcement — but since this is a concept unlikely to be marketed as a consumer product, the announcement itself can be treated as something of a teaser. We’ll find out soon enough.


Source: Weibo


Monday, December 16, 2019

Google Calendar finally begins testing integration with Google Tasks

Google launched a standalone app for Google Tasks way back in April last year. And since then, users have requested seamless Tasks integration in Google Calendar. Google has been working on the Tasks integration for a while now and we spotted the feature in one of our recent teardowns. The APK teardown revealed strings of code highlighting the upcoming functionality and mentioned a new task button, prompts to create repeating tasks, and task descriptions among other things. It seems like the feature is now close to release as it can now be manually triggered in the latest release of Google Calendar.

[Screenshots: Google Tasks integration in the Google Calendar app]

Our Editor-in-Chief Mishaal Rahman has managed to manually enable the new Google Tasks integration in version 2019.47.2-284533606-release of the Google Calendar app. As you can see in the screenshots above, the app now has a new Task button which allows you to quickly add a task, just like you would add a reminder or a goal. You just need to tap on the new Task button, select a date, enter your task, choose whether you want it to repeat, and you’re done.

The tasks you add in the Google Calendar app appear just like your reminders in the calendar view. To help you easily differentiate between tasks and other calendar entries, the app also lets you select a different accent color for tasks and set up different notification sounds. The Google Tasks notification looks like any other notification pushed from Google Calendar, and you can easily dismiss it by tapping on Done.

[Screenshots: Google Tasks notification in the Google Calendar app]

What’s really great about the Tasks integration is that most of the aforementioned features work even if you don’t have the Google Tasks app on your phone. However, if you wish to tap on any task you create (either in the calendar view or on a notification), tap on the overflow menu (as seen in the screenshot above), or tap on the “View in Tasks” action, you’ll need the Tasks app on your phone. Considering that Tasks integration has finally reached the testing phase, it shouldn’t be long before Google officially rolls it out to users.


Google is testing a dedicated media player app for Chrome OS

This may come as some surprise to you, but Chrome OS doesn’t currently have a dedicated media player app. Viewing photos and videos is done through the Files app, which works fine for the most part. It now appears Google is working on a media player app for Chrome OS, and it will be a System Web App (SWA).

The latest version of Chrome OS Canary has a rudimentary media app that can be accessed from chrome://media-app along with a shortcut in the launcher. Right now, the app consists of a simple dialog that says you can “Drag and drop a file or select open” and an “OPEN” button. Videos play just fine in the app, though there aren’t many controls. Photos appear with a few basic image editing options. There’s not much to it.

As mentioned, this is a System Web App just like the existing Settings app. That means you can access it from a URL in the regular browser window, but it also comes with an app icon in the launcher. Google has also been working on a System Web App version of the Camera, which can be enabled in the Canary version of Chrome OS with the flag #camera-system-web-app. Google may be in the process of converting all Chrome OS apps to System Web Apps.


Source: ChromeStory 


Stable Android 10 (One UI 2) begins rolling out to the Galaxy S10 in the US

Samsung kicked off the One UI 2 beta (Android 10) for the Galaxy S10 series back in October. Late last month, the stable update started rolling out to beta testers in Germany. Today, the company has started rolling out the stable version of One UI 2 to Galaxy S10, S10+, and S10e beta testers one month early.

Samsung Galaxy S10 XDA Forum || Samsung Galaxy S10+ XDA Forum || Samsung Galaxy S10e XDA Forum

The update was announced in the beta app first and has since been confirmed by T-Mobile. S10 owners on T-Mobile, Sprint, and Xfinity Mobile have reported receiving the update already, as have some users in Canada. The build numbers for the entire family are below:

  • Galaxy S10 (G973USQU2CSKP)
  • Galaxy S10+ (G975USQU2CSKP)
  • Galaxy S10e (G970USQU2CSKP)

The One UI 2 update comes in at around 2.4GB in size and includes the December security patch. As Samsung’s announcement says, this update is rolling out to beta testers before it officially arrives for others. So if you have been enrolled in the One UI 2 beta program on your Galaxy S10, you can download stable Android 10 before everyone else. It should only be a matter of time before the update starts appearing on AT&T, Verizon, and more Canadian devices.


Source: Reddit | Via: Droid-Life


ColorOS 7 Review: A fresh new look makes this one of the most compelling user interfaces

ColorOS is OPPO’s custom user interface on top of Android. At first, its functionality was tailored to users in China, which remains the company’s biggest market. Not unlike other user interfaces from Chinese OEMs, it borrowed inspiration from the design elements of iOS while offering greater functionality than stock Android. However, OPPO’s expansion in international markets such as the Indian subcontinent, Southeast Asia, and Europe meant that ColorOS needed to evolve to keep up with the times. The iOS-inspired design elements were now a liability rather than an asset, as they conflicted with stock Android’s design language. To rectify this, OPPO released ColorOS 6, based on Android 9 Pie, in March this year. ColorOS 6 was a good improvement over previous versions, but it still had some functionality drawbacks and aesthetic issues that prevented it from being regarded as one of the better full-featured custom user interfaces.

OPPO, however, hasn’t given up on improving its software. Android 10 was released in September, and although OPPO wasn’t the quickest device manufacturer to roll out the update to its phones, the company offered a detailed roll-out schedule for the next version of its custom UI, which would be called ColorOS 7. At the end of October, OPPO started rolling out ColorOS 6.7 for the first-generation OPPO Reno. ColorOS 6.7 was very similar to ColorOS 7 (as they’re both Android 10-based), but it saw limited availability, being offered for just one phone. We did an in-depth review of ColorOS 6.7, which also functions as a review of ColorOS as a whole.

OPPO held a separate event for the international launch of ColorOS 7 on November 26 in New Delhi, India, after launching it for the Chinese market on November 20. The event’s location showed that OPPO was focusing on the Indian market, which is no surprise considering that India is the world’s second-largest smartphone market. The ColorOS 7 upgrade adoption plan is said to be the largest update plan ever for ColorOS.

The ColorOS 7 update is now being rolled out as a trial version upgrade for users of the OPPO Reno 10x Zoom and OPPO Reno. Users of the OPPO Reno 2 will get the update before the end of the year (as will the F11, F11 Pro, and F11 Pro Marvel’s Avengers Limited Edition), and other OPPO phones will receive it in batches. The full roll-out schedule can be read here.

Our ColorOS 6.7 review covers much of what is new in ColorOS 7, so readers are invited to check that out. This article will attempt to cover the ColorOS 7 functionality that was not covered in the older review, such as Smart Assistant, Doc Vault, the one-hand friendly modal UI, new system sounds and wallpapers, and an all-new icon design. In essence, this review is an addendum. Without any further ado, let’s delve right into ColorOS 7.

About this review: This review was based on three weeks of usage of ColorOS 7 on the OPPO Reno 10x Zoom, which was loaned to XDA by OPPO.


The Good

  • More minimalist UI
  • Full support for Android’s notification features
  • ColorOS 7 has a better dark mode implementation than Google’s stock Android
  • Rich feature set that can go head-to-head with the best Android custom user interfaces

The Bad

  • ColorOS 7 has a lot of bloatware, including region-specific bloatware for India

ColorOS 7 is based on Android 10

The latest version of OPPO’s user interface is, as expected, based on the latest version of Android: Android 10. That means users will get all the expected Android 10 features: full-screen navigation gestures, dark mode, more granular permission management, and a mandatory Digital Wellbeing solution (in ColorOS 7, OPPO uses Google’s Digital Wellbeing implementation instead of developing a custom solution). Android 10 in itself is a solid upgrade over Android 9, and it’s good to see that every one of its flagship features is retained in ColorOS 7.


ColorOS 7’s design is much improved from ColorOS 6.0

[Screenshots: ColorOS 7 home screen, notification center, control center, and recent apps]

OPPO’s ColorOS 7 comes with a new user interface that is a breath of fresh air, as it eschews the blur-focused design that so many China-based user interfaces share. Instead, its UI is starkly 2D and starkly minimalist. We went into greater detail in our ColorOS 6.7 review, but suffice it to say that this is not the ColorOS of old. Let’s take one example: the bright contrasting colors in the Control Center (quick settings menu) have vanished. OPPO now uses a single shade of green, and while that may sound insignificant, it makes a big difference considering that users open the Control Center multiple times every day. Such examples of minimalism are found throughout the UI. The Recent Apps menu, the calling screen, and applications such as the dialer all benefit from the lack of visual clutter.

ColorOS 7 has a brand new icon design, featuring rounded squares. The icons themselves are nicer-looking than those of ColorOS 6.0, as the color scheme is more pleasing to the eye. OPPO says that the new icon design works with hundreds of third-party apps, ensuring visual consistency. In my usage, all of my third-party apps did adapt seamlessly to OPPO’s rounded square icon design.

The system comes with three new minimalist abstract wallpapers on top of the ColorOS 6 wallpaper collection. Additions include the Hawa Mahal live wallpaper, an example of a localized theme. The Artist Wallpaper Project lets users design wallpapers of their own.

ColorOS 7 brings an improved sound system, including new ringtones, notification sounds, and alarm sounds. Again, this doesn’t sound like a big improvement, but the new default notification sounds are quite good. It’s a small improvement that will be felt every day, so it’s good to see OPPO nailing the basics.

One of the biggest improvements that ColorOS 7 brings is the modal page, which helps the one-handed usability of the UI. Samsung popularized this with One UI, and OPPO has now brought its own unique implementation. The toggles in the Control Center are placed lower, for one. The modal page is used in system apps such as Clock, Contacts, and Messaging, and it’s placed on the lower half of the display. This proves beneficial when using phones with big displays, such as the OPPO Reno 10x Zoom with its 6.6-inch 19.5:9 display.


ColorOS 7 brings useful functionality on top of Android 10

ColorOS 7 is one of the most full-featured custom user interfaces out there. Its dark mode implementation is better than that of stock Android 10, as explained in our ColorOS 6.7 review. Dark mode in ColorOS 7 is adaptive, with support for hundreds of third-party apps, including the top 200 apps for users. The enhanced three-finger screenshot feature is another great example. Most custom user interfaces feature a gesture to take a screenshot by swiping down with three fingers, but ColorOS 7 also lets users take a long screenshot or a short one by defining the screenshot area, which is not found in any other custom user interface. Screenshots can also be edited instantly after taking them.

[Screenshots: ColorOS 7 Do Not Disturb settings, Smart Assistant, and personal information protection]

Features such as Riding Mode, Smart Assistant, and Doc Vault have been customized for international markets (once again, with a particular focus on India). Riding Mode is a specialized do-not-disturb mode for cyclists and motorcyclists: it allows calls from only specified contacts and silences other notifications. Doc Vault, on the other hand, is a partnership between ColorOS and DigiLocker, the Indian digital document issuing platform, which allows users to access digital versions of official documents and certificates straight from their phones. This can be used to speed up the ID verification process in places like airports or hotels, for example. It should be noted that we were unable to test this feature, as it is not present in the current ColorOS 7 trial version on the OPPO Reno 10x Zoom.

Smart Assistant is OPPO’s version of the customized left-hand panel on the home screen of the system launcher. OnePlus has its Shelf feature, while Xiaomi has App Vault. However, OPPO’s Smart Assistant is more feature-rich than its two competitors. OPPO describes it as a “handy information platform” that lets users view their step count, manage events, track packages, download popular apps, and more, all in a single place. Smart Assistant’s quick functions let users access Google Search, scan documents and cards, and translate text in photos. Users can also follow matches in popular games, see weather information, and enable a favorite contacts widget on the assistant for quick dialing.

The privacy protection features that ColorOS offers are a significant differentiating factor for the custom UI. We explained the innovative Personal Information Leakage Protection feature in detail in our ColorOS 6.7 review; the feature won’t be found in any other custom UI for now, although it’s disabled by default. OPPO specifically promotes that, unlike most custom user interfaces, ColorOS 7 allows users to decline an app’s permission request while still being able to use the app, thanks to the option of sending blank contact information. The focus on privacy is welcome, as ColorOS has 300 million active users according to OPPO, and the OS clearly prioritizes user privacy and security.

Private Safe is another example of a privacy-focused feature. It keeps important private files safe by transferring them to a protected storage folder, where they can’t be accessed, read, or modified by other applications. This requires a privacy protection password; pattern unlock is not an option here.

[Screenshot: ColorOS 7 camera app night mode]

In terms of imaging additions, the camera app of ColorOS 7 is visually similar to that of ColorOS 6.0, but it comes with functional improvements. Specifically, it has a new Ultra Night Mode. This proves its worth on the OPPO Reno 10x Zoom by improving image quality to the point where the night mode is a serious competitor for Samsung’s night mode on the Samsung Galaxy S10, for example. Ultra Night Mode is said to optimize the clarity, brightness, and color of photos taken at night through multi-frame HDR and “smart AI algorithms”. The optimized post-processing algorithm is also said to reduce image processing time; photos in this mode can be generated in 2.5 seconds, improved from the 4-5 second wait time of the older night mode found in ColorOS 6.0.

Apart from this, we get AI Beautification 2.0 (which is thankfully disabled by default) and smart AI noise cancellation, and the bokeh effect can now be applied in portraits as well as videos. According to OPPO, smart AI noise cancellation is able to repair pixel-level defects by anticipating noise points, making sure photos won’t turn out grainy and noisy.

Finally, OPPO also includes its own video editor in ColorOS 7, named Soloop. This is a basic video editor that does the job for casual editing with respect to adding filters and effects, but advanced users will want to head to the Google Play Store and download a third-party app.

In terms of performance improvements, ColorOS 7 doesn’t leave users wanting more. Cache Preload is said to improve app cold starts by 25%. oSense, on the other hand, is said to be a scheduling mechanism that gives priority to front-end and user-related threads to optimize touch response and frame rates. Similarly, oMem is a priority management solution that allocates higher priority to the most frequently used apps. UFS+ is a System Anti-Aging solution, but further details were not given on how the feature works.

Game Space and Game Assistant are OPPO’s implementation of the gaming mode feature that has found its way into most custom user interfaces in 2019. Game Space enables users to manage and quick-launch games, while Game Assistant provides an autoplay feature and a customizable split screen mode. Do Not Disturb is included in Game Assistant, and users can choose to reject incoming calls as well. OPPO says that its oSense tech solution improves touch response by ~21% and frame rates by ~38%.


Conclusion

OPPO has gone from strength to strength in 2019. Its hardware this year has been defined by shark-fin pop-up cameras, the innovative 5x optical zoom periscope camera module on the OPPO Reno 10x Zoom, 65W SuperVOOC 2.0 charging on the OPPO Reno Ace, and other futuristic features. We know that in Q1 2020, the company will announce the OPPO Find X2 with 5G support, the Qualcomm Snapdragon 865, a camera with an innovative autofocus solution, and more.

OPPO’s hardware has proved itself to be a differentiating factor; the software needed to keep up.

With ColorOS 7, OPPO has achieved that. Does it have annoyances and small usability issues? Yes, it does. On the other hand, it also has unique features that the competition doesn’t have an answer for, at least as of now. In its latest iteration, ColorOS is now an asset for OPPO phones, which can only be a good thing. We are excited to observe OPPO’s hardware and software development efforts in 2020.

We thank OPPO for sponsoring XDA. OPPO had minimal involvement in the creation of the content within this article. In particular, they were consulted for fact-checking. Any opinions expressed are those of the author. Our sponsors help us pay for the many costs associated with running XDA, including servers, developers, writers, and more. While you may see sponsored content alongside Portal content, all of it will be clearly labelled as such. The XDA Portal team will not compromise journalistic integrity by accepting money to write favorably about a company. Our opinion cannot be bought. Sponsored content, advertising, and the XDA Depot are managed by a separate team.


How Qualcomm Brought Tremendous Improvements in AI Performance to the Snapdragon 865

It seems like we can’t go a day without seeing “artificial intelligence” in the news, and this past week was no exception in no small part thanks to the Snapdragon Tech Summit. Every year, Qualcomm unveils the plethora of improvements it brings to its Hexagon DSP and the Qualcomm AI Engine, a term they use for their entire heterogeneous compute platform – CPU, GPU, and DSP – when talking about AI workloads. A few years ago, Qualcomm’s insistence on moving the conversation away from traditional talking points, such as year-on-year CPU performance improvements, seemed a bit odd. Yet in 2019 and with the Snapdragon 865, we see that heterogeneous computing is indeed at the helm of their mobile computing push, as AI and hardware-accelerated workloads seem to sneak their way into a breadth of use cases and applications, from social media to everyday services.

The Snapdragon 865 is bringing Qualcomm’s 5th generation AI engine, and with it come juicy improvements in performance and power efficiency — but that’s to be expected. In a sea of specifications, performance figures, fancy engineering terms, and tiresome marketing buzzwords, it’s easy to lose sight of what these improvements actually mean. What do they describe? Why are these upgrades so meaningful to those implementing AI in their apps today, and perhaps more importantly, to those looking to do so in the future?

In this article, we’ll take an approachable yet thorough tour of the Qualcomm AI Engine, combing through its history, its components, the Snapdragon 865’s upgrades, and, most importantly, why and how each of these has contributed to today’s smartphone experience, from funny filters to digital assistants.

The Hexagon DSP and Qualcomm AI Engine: When branding makes a difference

While I wasn’t able to attend this week’s Snapdragon Tech Summit, I have nonetheless attended every other one since 2015. If you recall, that was the year of the hot mess that was the Snapdragon 810, and so journalists at that Chelsea loft in New York City were eager to find out how the Snapdragon 820 would redeem the company. And it was a great chipset, alright: It promised healthy performance improvements (with none of the throttling) by going back to the then-tried-and-true custom cores Qualcomm was known for. Yet I also remember a very subtle announcement that, in retrospect, ought to have received more attention: the second generation Hexagon 680 DSP and its single instruction, multiple data (SIMD) Hexagon Vector eXtensions, or HVX. Perhaps if engineers hadn’t named the feature, it would have received the attention it deserved.

This coprocessor allows the scalar DSP unit’s hardware threads to access HVX “contexts” (register files) for wide vector processing capabilities. It enabled the offloading of significant compute workloads from the power-hungry CPU or GPU to the power-efficient DSP so that imaging and computer vision tasks would run at substantially improved performance per milliwatt. These vector units are perfect for applying identical operations on contiguous vector elements (originally just integers), making them a good fit for computer vision workloads. We’ve written an in-depth article on the DSP and HVX in the past, noting that the HVX architecture lends itself well to parallelization and, obviously, processing large input vectors. At the time, Qualcomm promoted both the DSP and HVX almost exclusively by describing the improvements they would bring to computer vision workloads such as the Harris corner detector and other sliding window methods.

It wasn’t until the advent of deep learning in consumer mobile applications that the DSP, its vector processing units (and now, a tensor accelerator) would get married to AI and neural networks in particular. But looking back, it makes perfect sense: The digital signal processor (DSP) architecture, originally designed for handling digitized real-world or analog signal inputs, lends itself to many of the same workloads as machine learning algorithms and neural networks. For example, DSPs are tailored for filter kernels, convolution and correlation operations, 8-bit calculations, a ton of linear algebra (vector and matrix products) and multiply-accumulate (MAC) operations, all most efficient when parallelized. A neural network’s runtime is also highly dependent on multiplying large vectors, matrices and/or tensors, so it’s only natural that the DSP’s performance advantages neatly translate to neural network architectures as well. We will revisit this topic shortly!
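To make that connection concrete, below is a minimal Python sketch of the multiply-accumulate pattern shared by DSP filter kernels and neural network layers. The function and values are illustrative only, not taken from any Qualcomm SDK.

    # A sliding-window FIR filter: each output sample is a chain of
    # multiply-accumulate (MAC) operations, the same primitive that
    # dominates neural network inference.
    def fir_filter(signal, kernel):
        out = []
        for i in range(len(signal) - len(kernel) + 1):
            acc = 0.0
            for j, coeff in enumerate(kernel):
                acc += coeff * signal[i + j]  # one MAC step
            out.append(acc)
        return out

    # A 3-tap moving-average filter smoothing a noisy step signal.
    print(fir_filter([0.0, 0.1, 0.9, 1.1, 1.0, 0.95], [1/3, 1/3, 1/3]))

Each output sample depends only on its own input window, so the windows can be processed in parallel, which is exactly the shape of workload SIMD hardware like HVX is built for.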

In subsequent years, Qualcomm continued to emphasize that they offer not just chipsets, but mobile platforms, and that they focus not just on improving particular components, but delivering “heterogeneous” compute. In 2017, they released their Snapdragon Neural Processing Engine SDK (for runtime acceleration) on the Qualcomm Developer Network, and in early 2018 they announced the Qualcomm Artificial Intelligence Engine to consolidate their several AI-capable hardware (CPU, GPU, DSP) and software components under a single name. With this useful nomenclature, they were able to neatly advertise their AI performance improvements on both the Snapdragon 855 and Snapdragon 865, being able to comfortably spell out the number of trillions of operations per second (TOPS) and year-on-year percentage improvements. Harnessing the generational improvements in CPU, GPU, and DSP – all of which see their own AI-focused upgrades – the company is able to post impressive benchmarks against competitors, which we’ll go over shortly. With the company’s recent marketing efforts and unified, consistent messaging on heterogeneous computing, their AI branding is finally gaining traction among journalists and tech enthusiasts.

Demystifying Neural Networks: A mundane pile of linear algebra

To disentangle a lot of jargon we’ll come across later in the article, we need a short primer on what a neural network is and what you need to make it faster. I want to very briefly go over some of the mathematical underpinnings of neural networks, avoiding as much jargon and notation as possible. The purpose of this section is simply to identify what a neural network is doing, fundamentally: the arithmetic operations it executes, rather than the theoretical basis that justifies said operations (that is far more complicated!). Feel free to proceed to the next section if you want to jump straight to the Qualcomm AI Engine upgrades.

“Vector math is the foundation of deep learning.” – Travis Lanier, Senior Director of Product Management at Qualcomm at the 2017 Snapdragon Tech Summit

Below you will find a very typical feedforward fully-connected neural network diagram. In reality, the diagram makes the whole process look a bit more complicated than it is (at least, until you get used to it). We will compute a forward pass, which is ultimately what a network is doing whenever it produces an inference, a term we’ll encounter later in the article as well. At the moment, we will only concern ourselves with the machine and its parts, with brief explanations of each component.

A neural network consists of sequential layers, each comprised of several “neurons” (depicted as circles in the diagram) connected by weights (depicted as lines in the diagram). In general terms, there are three kinds of layers: the input layer, which takes the raw input; hidden layers, which compute mathematical operations from the previous layer; and the output layer, which provides the final predictions. In this case, we have only one hidden layer, with three hidden units. The input consists of a vector, array, or list of numbers of a particular dimension or length. In the example, we will have a two-dimensional input, let’s say [1.0, -1.0]. Here, the output of the network consists of a scalar or single number (not a list). Each hidden unit is associated with a set of weights and a bias term, shown alongside and below each node. To calculate the weighted sum output of a unit, each weight is multiplied with each corresponding input, and then the products are added together. Then, we will simply add the bias term to that sum of products, resulting in the output of the neuron. For example, with our input of [1.0, -1.0], the first hidden unit will have an output of 1.0*0.3 + (-1.0)*0.2 + 1.0 = 1.1. Simple, right?

The next step in the diagram represents an activation function, and it is what will allow us to produce the output vector of each hidden layer. In our case, we will be using the very popular and extremely simple rectified linear unit, or ReLU, which will take an input number and output either (i) zero, if that number is negative or zero, or (ii) the input number itself, if the number is positive. For example, ReLU(-0.1) = 0, but ReLU(0.1) = 0.1. Following the example of our input as it propagates through that first hidden unit, the output of 1.1 that we computed would be passed into the activation function, yielding ReLU(1.1) = 1.1. The output layer, in this example, will function just like a hidden unit: it will multiply the hidden units’ outputs against its weights, and then add its bias term of 0.2. The last activation function, the step function, will turn positive inputs into 1 and negative values into 0. Knowing how each of the operations in the network operates, we can write down the complete computation of our inference as follows:
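In general form (writing W and b for the hidden layer’s weight matrix and bias vector, and w for the output layer’s weights, with its bias of 0.2), the computation is:

    output = step( w · ReLU( W·x + b ) + 0.2 )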

That is all there is to our feedforward neural network computation. As you can see, the operations consist almost entirely of products and sums of numbers. Our activation function ReLU(x) can be implemented very easily as well, for example by simply calling max(x, 0), such that it returns x whenever the input is greater than 0, but otherwise returns 0. Note that step(x) can be computed similarly. Many more complicated activation functions exist, such as the sigmoid function or the hyperbolic tangent, involving different internal computations and better suited for different purposes. Another thing you can already begin noticing is that we can also run the three hidden units’ computations, and their ReLU applications, in parallel, as their values are not needed until we calculate their weighted sum at the output node.
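As a quick illustration, here is that forward pass in plain Python. Only the first hidden unit’s weights (0.3 and 0.2), its bias (1.0), and the output bias (0.2) are given in the text; every other parameter below is a made-up placeholder.

    def relu(x):
        return max(x, 0.0)

    def step(x):
        return 1.0 if x > 0 else 0.0

    # Weights and biases of the three hidden units. Only the first row
    # comes from the example; the rest are illustrative placeholders.
    hidden_weights = [[0.3, 0.2], [-0.5, 0.4], [0.1, -0.7]]
    hidden_biases = [1.0, 0.0, 0.5]
    output_weights = [0.6, -0.3, 0.8]  # placeholders
    output_bias = 0.2                  # from the example

    def forward(x):
        hidden = [relu(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(hidden_weights, hidden_biases)]
        total = sum(w * h for w, h in zip(output_weights, hidden)) + output_bias
        return step(total)

    print(forward([1.0, -1.0]))  # the first hidden unit computes ReLU(1.1) = 1.1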

But we don’t have to stop there. Above, you can see the same computation, but this time represented with matrix and vector multiplication operations instead. To arrive at this representation, we “augment” our input vector by adding a 1.0 to it (lighter hue), such that when we put our weights and our bias (lighter hue) in the matrix as shown above, the resulting multiplication yields the same hidden unit outputs. Then, we can apply ReLU on the output vector, element-wise, and then “augment” the ReLU output to multiply it by the weights and bias of our output layer. This representation greatly simplifies notation, as the parameters (weights and biases) of an entire hidden layer can be tucked under a single variable. But most importantly for us, it makes it clear that the inner computations of the network are essentially matrix and vector multiplications or dot products. Given how the size of these vectors and matrices scales with the dimensionality of our inputs and the number of parameters in our network, most runtime will be spent doing these sorts of calculations. A bunch of linear algebra!
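The same computation, expressed with NumPy in the augmented matrix form just described (carrying over the placeholder parameters from the sketch above):

    import numpy as np

    # Each row of W1 holds one hidden unit's weights, with its bias in the
    # last column; appending a 1.0 to the input makes the bias part of the
    # same matrix-vector product.
    W1 = np.array([[ 0.3,  0.2, 1.0],   # first unit, values from the example
                   [-0.5,  0.4, 0.0],   # placeholders
                   [ 0.1, -0.7, 0.5]])  # placeholders
    W2 = np.array([0.6, -0.3, 0.8, 0.2])  # output weights + bias

    x = np.array([1.0, -1.0, 1.0])        # input augmented with a 1.0
    hidden = np.maximum(W1 @ x, 0.0)      # one matrix-vector product + ReLU
    out = 1.0 if W2 @ np.append(hidden, 1.0) > 0 else 0.0  # step function
    print(hidden[0], out)                 # hidden[0] == 1.1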

Our toy example is, of course, very limited in scope. In practice, modern deep learning models can have tens if not hundreds of hidden layers, and millions of associated parameters. Instead of our two-dimensional vector input example, they can take in vectors with thousands of entries, in a variety of shapes, such as matrices (like single-channel images) or tensors (three-channel RGB images). There is also nothing stopping our matrix representation from taking in multiple input vectors at once, by adding rows to our original input. Neural networks can also be “wired” differently than our feedforward neural network, or execute different activation functions. There is a vast zoo of network architectures and techniques, but in the end, they mostly break down to the same parallel arithmetic operations we find in our toy example, just at a much larger scale.

Visual example of convolution layers operating on a tensor. (Image credit: Towards Data Science)

For example, the popular convolutional neural networks (CNNs) that you likely have read about are not “fully-connected” like our mock network. The “weights” or parameters of its hidden convolutional layers can be thought of as a sort of filter, a sliding window applied sequentially to small patches of an input as shown above — this “convolution” is really just a sliding dot product! This procedure results in what’s often called a feature map. Pooling layers reduce the size of an input or a convolutional layer’s output, by computing the maximum or average value of small patches of the image. The rest of the network usually consists of fully-connected layers, like the ones in our example, and activation functions like ReLU. This is often used for feature extraction in images where early convolutional layers’ feature maps can “detect” patterns such as lines or edges, and later layers can detect more complicated features such as faces or complex shapes.
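As a rough sketch of what “a sliding dot product” means in code, here is a naive convolution and 2x2 max pooling in Python with NumPy; real implementations (on a tensor accelerator or elsewhere) are far more optimized, and the edge-detecting kernel is just an illustrative choice.

    import numpy as np

    def conv2d(image, kernel):
        # Slide the kernel over the image; each output value is the dot
        # product of the kernel with one image patch (a feature map entry).
        kh, kw = kernel.shape
        h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out

    def max_pool_2x2(fmap):
        # Keep the maximum of every 2x2 patch, halving each dimension.
        h, w = fmap.shape[0] // 2, fmap.shape[1] // 2
        return fmap[:h*2, :w*2].reshape(h, 2, w, 2).max(axis=(1, 3))

    image = np.zeros((6, 6))
    image[:, 3] = 1.0                              # one bright vertical line
    fmap = conv2d(image, np.array([[1.0, -1.0]]))  # crude edge detector
    print(max_pool_2x2(fmap))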

All of what’s been said is strictly limited to inference, or evaluating a neural network after its parameters have been found through training, which is a much more complicated procedure. And again, we’ve excluded a lot of explanations. In reality, each of the network’s components is included for a purpose. For example, those of you who have studied linear algebra can readily observe that without the non-linear activation functions, our network simplifies to a linear model with very limited predictive capacity.

An Upgraded AI Engine on the Snapdragon 865 – A Summary of Improvements

With this handy understanding of the components of a neural network and their mathematical operations, we can begin to understand exactly why hardware acceleration is so important. In the last section, we observed that parallelization is vital to speeding up the network, given that it allows us, for example, to compute several parallel dot products corresponding to each neuron activation. Each of these dot products is itself made up of multiply-add operations on numbers, usually with 8-bit precision in the case of mobile applications, that must happen as quickly as possible. The AI Engine offers various components to offload these tasks depending on the performance and power efficiency considerations of the developer.

A diagram of a CNN for the popular MNIST dataset, shown on stage at this year’s Snapdragon Summit. The vector processing unit is a good fit for the fully-connected layers, like in our mock example. Meanwhile, the tensor processor handles the convolutional and pooling layers that process multiple sliding kernels in parallel, like in the diagram above, and each convolutional layer might output many separate feature maps.

First, let’s look at the GPU, which we usually speak about in the context of 3D games. The consumer market for video games has stimulated development in graphics processing hardware for decades, but why are GPUs so important for neural networks? For starters, they chew through massive lists of 3D coordinates of polygon vertices at once to keep track of an in-game world state. They must also perform gigantic matrix multiplication operations to convert (or map) these 3D coordinates onto 2D planar, on-screen coordinates, and handle the color information of pixels in parallel. To top it all off, they offer high memory bandwidth to handle the massive memory buffers for the texture bitmaps overlaid onto the in-game geometry. Their advantages in parallelization, memory bandwidth, and resulting linear algebra capabilities match the performance requirements of neural networks.
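As a toy example of that batched matrix work, the snippet below projects several 3D points to 2D screen coordinates with a single matrix product; the pinhole projection matrix is an assumed simplification of what a real graphics pipeline does.

    import numpy as np

    # A toy pinhole projection: one matrix maps many homogeneous 3D points
    # to the image plane at once, the same one-matrix-many-vectors pattern
    # found in neural network layers.
    P = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])

    # Four vertices as homogeneous column vectors (rows: x, y, z, w).
    points = np.array([[ 1.0, -1.0,  2.0,  0.5],
                       [ 1.0,  1.0,  0.0,  0.5],
                       [ 2.0,  2.0,  4.0,  1.0],
                       [ 1.0,  1.0,  1.0,  1.0]])

    projected = P @ points                     # all vertices in one product
    screen_xy = projected[:2] / projected[2]   # perspective divide by depth
    print(screen_xy.T)                         # one 2D coordinate per vertex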

The Adreno GPU line thus has a big role to play in the Qualcomm AI Engine, and on stage, Qualcomm stated that this updated component in the Snapdragon 865 enables twice the floating-point capability and twice the number of TOPS compared to the previous generation, which is surprising given that they only posted a 25% performance uplift for graphics rendering. Still, for this release, the company boasts a 50% increase in the number of arithmetic logic units (ALUs), though, as per usual, they have not disclosed their GPU frequencies. Qualcomm also listed mixed-precision instructions, which is just what it sounds like: different numerical precision across operations in a single computational method.

Adreno 650 GPU in the Qualcomm Snapdragon 865

The Hexagon 698 DSP is where we see a huge chunk of the performance gains offered by the Snapdragon 865. This year, the company has not communicated improvements in their DSP’s Vector eXtensions (whose performance quadrupled in last year’s 855), nor their scalar units. However, they do note that for this block’s Tensor Accelerator, they’ve achieved four times the TOPS compared to the version introduced last year in the Hexagon 690 DSP, while also being able to offer 35% better power efficiency. This is a big deal considering the prevalence of convolutional neural network architectures in modern AI use cases, ranging from image object detection to automatic speech recognition. As explained above, the convolution operation in these networks produces a 2D array of matrix outputs for each filter, meaning that when stacked together, the output of a convolution layer is a 3D array or tensor.

Qualcomm also promoted their “new and unique” deep learning bandwidth compression technique, which can apparently compress data losslessly by around 50%, in turn moving half the data and freeing up bandwidth for other parts of the chipset. It should also save power by reducing that data throughput, though we weren’t given any figures and there ought to be a small power cost to compressing the data as well.

On the subject of bandwidth, the Snapdragon 865 supports LPDDR5 memory, which will also benefit AI performance as it will increase the speed at which resources and input data are transferred. Beyond hardware, Qualcomm’s new AI Model Efficiency Toolkit makes model compression, and the resulting power efficiency savings, readily available to developers. Neural networks often have a large number of “redundant” parameters; for example, they may make hidden layers wider than they need to be. One of the AI Toolkit features discussed on stage is thus model compression, with two of the cited methods being spatial singular value decomposition (SVD) and Bayesian compression, both of which effectively prune the neural network by getting rid of redundant nodes and adjusting the model structure as required. The other model compression technique presented on stage relates to quantization, which involves changing the numerical precision of weight parameters and activation node computations.

The numerical precision of neural network weights refers to whether the numerical values used for computation are stored, transferred, and processed as 64-, 32-, 16- (half-precision), or 8-bit values. Using lower numerical precision (for example, INT8 versus FP32) reduces overall memory usage and data transfer volumes, allowing for higher bandwidth and faster inferences. A lot of today’s deep learning applications have switched to 8-bit precision models for inference, which might sound surprising: wouldn’t higher numerical accuracy enable more “accurate” predictions in classification or regression tasks? Not necessarily; higher numerical precision, particularly during inference, may be wasted, as neural networks are trained to cope with noisy inputs or small disturbances throughout training anyway, and the error on the lower-bit representation of a given (FP) value is uniformly “random” enough. In a sense, the low precision of the computations is treated by the network as another source of noise, and the predictions remain usable. Heuristic explainers aside, it is likely you will accrue an accuracy penalty when carelessly quantizing a model without taking some important considerations into account, which is why a lot of research goes into the subject.
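To ground the idea, here is a minimal sketch of symmetric post-training INT8 quantization in Python; this is the generic textbook recipe, not Qualcomm’s AI Model Efficiency Toolkit or its data-free method.

    import numpy as np

    def quantize_int8(weights):
        # Map FP32 weights onto [-127, 127] with a single per-tensor scale.
        scale = np.abs(weights).max() / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(w)
    print("bytes:", w.nbytes, "->", q.nbytes)  # the 4x storage reduction
    print("max round-trip error:", np.abs(w - dequantize(q, scale)).max())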

Back to the Qualcomm AI Toolkit: through it, Qualcomm offers data-free quantization, allowing models to be quantized without data or parameter fine-tuning while still achieving near-original model performance on various tasks. Essentially, it adapts weight parameters for quantization and corrects for the bias error introduced when switching to lower-precision weights. Given the benefits incurred by quantization, automating the procedure under an API call would simplify model production and deployment, and Qualcomm claims more than four times the performance per watt when running the quantized model.

But again, this isn’t shocking: quantizing models can offer tremendous bandwidth and storage benefits. Converting a model to INT8 not only nets you a 4x reduction in bandwidth, but also the benefit of faster integer computations (depending on the hardware). It is a no-brainer, then, that hardware-accelerated approaches to both the quantization and the numerical computation would yield massive performance gains. On his blog, for example, Google’s Pete Warden wrote that a collaboration between Qualcomm and Tensorflow teams enables 8-bit models to run up to seven times faster on the HVX DSP than on the CPU. It’s hard to overstate the potential of easy-to-use quantization, particularly given how Qualcomm has focused on INT8 performance.

The Snapdragon 865’s ARM-based Kryo CPU is still an important component of the AI Engine. Even though the hardware acceleration discussed in the above paragraphs is preferable, sometimes we can’t avoid applications that do not properly take advantage of these blocks, resulting in CPU fallback. In the past, ARM introduced specific instruction sets aimed at accelerating matrix- and vector-based calculations. In ARMv7 processors, we saw the introduction of ARM NEON, a SIMD architecture extension enabling DSP-like instructions. And with the ARMv8.4-A architecture, we saw the introduction of an instruction specifically for dot products.

All of these posted performance gains relate to many of the workloads we described in the previous section, but it’s also worth keeping in mind that these Snapdragon 865 upgrades are only the latest improvements in Qualcomm’s AI capabilities. In 2017, we documented their tripling of AI capabilities with the Hexagon 685 DSP and other chipset updates. Last year, they introduced their tensor accelerator, and integrated support for non-linearity functions (like the aforementioned ReLU!) at the hardware level. They also doubled the number of vector accelerators and improved the scalar processing unit’s performance by 20%. Pairing all of this with enhancements on the CPU side, like those faster dot-product operations courtesy of ARM, and the additional ALUs in the GPU, Qualcomm ultimately tripled raw AI capabilities as well.

Practical Gains and Expanded Use-Cases

All of these upgrades have led to five times the AI capabilities on the Snapdragon 865 compared to just two years ago, but perhaps most importantly, the improvements also came with better performance per milliwatt, a critical metric for mobile devices. At the Snapdragon Summit 2019, Qualcomm gave us a few benchmarks comparing their AI Engine against two competitors on various classification networks. These figures look to be collected using AIMark, a cross-platform benchmarking application, which enables comparisons against Apple’s A-series and Huawei’s HiSilicon processors. Qualcomm claims that these results make use of the entire AI Engine, and we’ll have to wait for more thorough benchmarking to properly disentangle the effect of each component and determine how these tests were conducted. For example, do the results from company B indicate CPU fallback? As far as I’m aware, AIMark currently doesn’t take advantage of the Kirin 990’s NPU on our Mate 30 Pro units, for example. But it does support the Snapdragon Neural Processing Engine, so it will certainly take advantage of the Qualcomm AI Engine; given that this is internal testing, it’s not explicitly clear whether the benchmark is properly utilizing the right libraries or SDK for its competitors.

It must also be said that Qualcomm is effectively comparing the Snapdragon 865’s AI processing capabilities against previously-announced or released chipsets. It is very likely that its competitors will bring similarly-impactful performance improvements in the next cycle, and if that’s the case, then Qualcomm would only hold the crown for around half a year from the moment Snapdragon 865 devices hit the shelves. That said, these are still indicative of the kind of bumps we can expect from the Snapdragon 865. Qualcomm has generally been very accurate when communicating performance improvements and benchmark results of upcoming releases.

Qualcomm Snapdragon 865 AI performance versus competitors

All of the networks presented in these benchmarks classify images from databases like ImageNet, receiving them as inputs and outputting one out of hundreds of categories. Again, they rely on the same kinds of operations we described in the second section, though their architectures are a lot more complicated than these examples, and they were regarded as state-of-the-art solutions at their time of publication. In the best of cases, their closest competitor provides less than half the number of inferences per second.

AI power consumption on the Qualcomm Snapdragon 865

In terms of power consumption, Qualcomm offered inferences per watt figures to showcase the amount of AI processing possible in a given amount of power. In the best of cases (MobileNet SSD), the Snapdragon AI Engine can offer double the number of inferences under the same power budget.

Power is particularly important for mobile devices. Think, for example, of a neural network-based Snapchat filter. Realistically, the computer vision pipeline extracting facial information and applying a mask or input transformation only needs to run at a rate of 30 or 60 completions per second to achieve a fluid experience. Increasing raw AI performance would enable you to take higher-resolution inputs and output better looking filters, but it might also simply be preferable to settle for HD resolution for quicker uploads and decrease power consumption and thermal throttling. In many applications, “faster” isn’t necessarily “better”, and one then gets to reap the benefits of improved power efficiency.

Snapdragon acceleration on the Qualcomm Snapdragon 865

During Day 2 of the Snapdragon Summit, Snapchat’s Sr. Director of Engineering, Yurii Monastyrshyn, took the stage to show how their latest deep learning-based filters are greatly accelerated by Hexagon Direct NN using the Hexagon 698 DSP on the Snapdragon 865.

On top of that, as developers get access to easier neural network implementations and more applications begin employing AI techniques, concurrency use cases will take more of a spotlight, as the smartphone will have to handle multiple parallel AI pipelines at once (either for a single application processing input signals from various sources, or for many applications running separately on-device). While we see respectable power efficiency gains across the compute DSP, GPU, and CPU, the Qualcomm Sensing Hub handles always-on use cases to listen for trigger words at very low power consumption. It enables monitoring audio, video, and sensor feeds at under 1mA of current, allowing the device to spot particular sound cues (like a baby crying) on top of the familiar digital assistant keywords. On that note, the Snapdragon 865 enables detecting not just the keyword but also who is speaking it, to identify an authorized user and act accordingly.

More AI on Edge Devices

These improvements can ultimately translate into tangible benefits for your user experience. Services that involve translation, object recognition and labeling, usage predictions or item recommendations, natural language understanding, speech parsing, and so on will gain the benefit of operating faster and consuming less power. Having a higher compute budget also enables the creation of new use cases and experiences, and moves processes that used to take place in the cloud onto your device. While AI as a term has been used in dubious, deceiving, and even erroneous ways in the past (even by OEMs), many of the services you enjoy today ultimately rely on machine learning algorithms in some form or another.

But beyond Qualcomm, other chipset makers have been quickly iterating and improving on this front too. For example, the HiSilicon Kirin 990 5G brought a 2+1 NPU core design resulting in up to 2.5 times the performance of the Kirin 980, and twice that of the Apple A12. When the processor was announced, it was shown to offer up to twice the frames (inferences) per second of the Snapdragon 855 on INT8 MobileNet, which is hard to square with the results provided by Qualcomm. The Apple A13 Bionic, on the other hand, reportedly offered up to six times faster matrix multiplication than its predecessor and improved its eight-core neural engine design. We will have to wait until we can properly test the Snapdragon 865 on commercial devices against its current and future competitors, but it’s clear that competition in this space never stays still, as the three companies have been pouring a ton of resources into bettering their AI performance.


[Update 6: Los Angeles] Verizon 5G is Rolling Out to More Cities

Update 6 (12/16/19 @ 10:40 AM ET): Verizon is rolling out 5G coverage in the Los Angeles area.

Update 5 (11/20/19 @ 9:10 AM ET): Verizon finally has detailed 5G coverage maps for every city on its website.

Update 4 (11/19/19 @ 9:25 AM ET): Verizon’s 5G network lights up in Boston, Houston, and Sioux Falls.

Update 3 (10/25/19 @ 12:45 PM ET): Verizon expands its 5G network coverage to Omaha and Dallas.

Update 2 (9/26/19 @ 1:15 PM ET): Verizon launches 5G service in New York City, Boise, and Panama City.

Update 1 (8/22/19 @ 12:15 PM ET): Verizon has announced the 5G rollout in Phoenix and a partnership with Boingo.

While many people are still skeptical about 5G, Verizon continues their rollout plans. Today, the company flipped the switch for four new cities: Atlanta, Detroit, Indianapolis, and Washington DC. Verizon is already selling a couple of 5G devices, but the list of available cities is still relatively small. So the continued expansion is good news.

Verizon’s 5G Ultra Wideband network is mmWave, just like AT&T’s, but different from Sprint’s sub-6GHz network. One of the limitations of mmWave is that you have to be in very specific locations to get the advertised 5G speeds. For example, read the description for Indianapolis below.

Indianapolis:

In Indianapolis, 5G Ultra Wideband service is initially available in parts of the following neighborhoods: Arsenal Heights, Bates Hendricks, Castleton, Crown Hill, Fountain Square, Grace Tuxedo Park, Hawthorne, Historic Meridian Park, Lockerbie Square, Ransom Place, Renaissance Place, St. Joseph Historic Neighborhood, Upper Canal, and Woodruff Place, and around such landmarks and public spaces as Garfield Park and the Indiana University School of Medicine.

Even if you have a 5G device and live in these cities, you may not be in the covered areas. These four new cities bring Verizon’s list up to nine, but they are still planning to have 5G in more than 30 cities by 2020. Soon, they will add the Galaxy Note 10 5G to the list of capable devices as well. Whether the market is ready or not, Verizon marches on with 5G.

Washington DC:

In Washington DC, consumers, businesses and government agencies can initially access Verizon’s 5G Ultra Wideband service in areas of Foggy Bottom, Dupont Circle, Cardozo / U Street, Adams Morgan, Columbia Heights, Le Droit Park, Georgetown Waterfront, Judiciary Square, Shaw, Eckington, NOMA, National Mall and the Smithsonian, Gallery Place / Chinatown, Mt. Vernon Square, Downtown, Penn Quarter, Brentwood, Southwest Waterfront, Navy Yard, and nearby Crystal City, VA, as well as around landmarks such as the Ronald Reagan National Airport, United States Botanical Gardens, Hart Senate Building, National Gallery of Art, Lafayette Square, The White House, Freedom Plaza, Farragut Square, George Washington University, Capital One Arena, Union Station, Howard University Hospital, George Washington University Hospital, and Georgetown Waterfront Park.

Atlanta:

In Atlanta, 5G Ultra Wideband service will initially be concentrated in parts of the following neighborhoods: Downtown, Midtown, Tech Square, and around such landmarks as The Fox Theater, Emory University Hospital Midtown, Mercedes-Benz Stadium, Home Depot Backyard, Centennial Olympic Park, Georgia Aquarium, World of Coca Cola, and parts of Renaissance Park.

Detroit:

In Detroit, 5G Ultra Wideband service will initially be concentrated in parts of the following areas: Dearborn, Livonia, and Troy, including areas around the Oakland-Troy Airport.

Source: Verizon


Update 1: Phoenix Launch + Boingo Partnership

Verizon’s 5G coverage is coming to Phoenix, AZ, bringing the list of 5G cities up to 10. The network will go live on August 23rd. Verizon also announced a partnership with Boingo to bring 5G Ultra Wideband service to indoor and public places.

This is important because Verizon’s current 5G network is essentially unusable indoors, a limitation of the technology they are using. The partnership should bring 5G to places like airports, stadiums, arenas, office buildings, hotels, etc.

Last, but not least, the Samsung Galaxy Note 10+ 5G will be available from Verizon tomorrow, August 23rd. The full retail price is $1,299.99.

Source: Verizon


Update 2: NYC, Boise, Panama City

Verizon’s 5G coverage is expanding to three more cities: New York City, Boise, and Panama City. In New York City, coverage will be in areas of Manhattan, Brooklyn, the Bronx, and around several landmarks. Verizon’s 5G technology limits coverage to very specific areas, so be sure to check the source below for all the exact locations you can access 5G in these cities.

Source: Verizon


Update 3: Omaha & Dallas

Today, Verizon has expanded 5G coverage to two more cities: Omaha, Nebraska and Dallas, Texas. This brings the number of cities with 5G coverage from Verizon up to 15. As with the previous announcements, the actual coverage areas are extremely specific. So if you live in these cities, be sure to check the link below to find out where you can get 5G speeds.

Source: Verizon


Update 4: Boston, Houston, and Sioux Falls

Verizon has announced that its 5G network is now live in three more cities across the US: Boston, MA, Houston, TX, and Sioux Falls, SD. This brings the total number of cities with Verizon 5G coverage up to 18. Just like the previous 15 cities, 5G is only accessible in these cities in very specific locations due to limitations with Verizon’s network technology. Be sure to visit the link below to see the exact locations where you can use 5G.

Source: Verizon


Update 5: 5G Coverage Map

Verizon has been flipping the switch for 5G in US cities for months, but they’ve never really had detailed coverage maps. You can now visit this page on Verizon’s website and select a city to see the 5G coverage. Maps show where 5G Ultra Wideband is strongest and you can zoom in to see LTE coverage as well. Verizon’s 5G coverage is very specific, so these maps are handy if you’re looking to try it out. The website also lists 10 cities that will get 5G next: Cincinnati, Kansas City, Charlotte, Little Rock, Cleveland, Memphis, Columbus, Salt Lake City, Des Moines, and San Diego.

Source: Verizon


Update 6: Los Angeles

Verizon 5G Ultra Wideband service is now available in areas around Los Angeles. As Verizon’s 5G network is limited to very specific locations, it’s not available city-wide. The exact locations are explained below, but Verizon will also have more detailed coverage maps available for the area on December 20th.

Parts of Downtown, Chinatown, Del Rey, and Venice around landmarks such as: Grand Park, Los Angeles Convention Center, Union Station, LA Live, Staples Center, and Venice Beach Boardwalk.

Source: Verizon
