To power its Pixel phones, Google built a custom system on a chip (SoC) called Tensor. Announcing it, Sundar Pichai, CEO of Google, tweeted: “So excited to introduce the new Google Tensor chip, which is 4 years in the making (for scale).” Billed as Pixel’s biggest innovation to date, Tensor builds on Google’s decades of computing experience. Pichai also confirmed that the Pixel 6 and Pixel 6 Pro will be available in the fall.
The club of in-house SoCs that previously included Apple (M1), Huawei (Kirin), and Samsung (Exynos) has now expanded to include Google. Tensor has several advantages, including:
A faster processor: Pixel phones capture images using computational photography and machine learning (e.g., Google’s Night Sight), and the tech giant has also released a powerful speech recognition model. To offer the best experience, these features require a high level of computational power and very low latency, which Tensor is designed to deliver, so Pixel smartphones can run complex artificial intelligence workloads.
Bringing in new AI features: With Tensor, Google can introduce new machine learning-based features without having to worry about performance, since heavy AI workloads depend on a capable processor.
More layers of hardware security: Titan M2, working alongside Tensor’s new security core, adds a further layer of protection. It is a custom chip developed by Google to protect sensitive information such as passwords, enable encryption, and secure app transactions.
With the recent launch of the Android 12 beta, Pixel users can now personalize their devices, including the notification shade, volume controls, and lock screen. Android 12 also gives users more control over where their private information can be accessed and more transparency about which apps are accessing what data.
A phone’s performance and battery life both hinge on its processor. Despite owning the Android OS, Google has been unable to take a serious bite out of the smartphone market, and with the new Tensor chips the company aims to revitalize its smartphone segment. Artificial intelligence and machine learning have been at the forefront of Google’s innovations in recent years.
Google LaMDA: Google’s Language Model for Dialogue Applications sits alongside current language models such as BERT and GPT-3 and is built on Transformer, an open-source neural network architecture from Google Research. For now the system is trained on text, but future applications include Google Maps, conversational AI, and more.
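At the heart of the Transformer architecture that LaMDA builds on is scaled dot-product attention. The sketch below is a minimal NumPy illustration of that single operation, not LaMDA’s actual implementation; the matrix sizes are arbitrary and chosen only for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: softmax(Q K^T / sqrt(d_k)) @ V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 tokens, key dimension d_k = 4
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))

out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one 4-dimensional output vector per token
```

Each output row is a weighted mix of the value vectors, with weights determined by how strongly each query matches each key; stacks of such layers are what let models like LaMDA relate words across a whole dialogue.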
AI in Google Maps: Users can now choose between Eco-Friendly and Safer Routing, based on real-time weather and traffic information.
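Conceptually, eco-friendly routing amounts to ranking candidate routes by a different cost than travel time alone. The snippet below is a purely illustrative sketch; the route names, fields, and numbers are invented and do not reflect Google Maps’ actual API or data.

```python
# Hypothetical candidate routes between the same two points.
routes = [
    {"name": "highway",  "minutes": 22, "fuel_liters": 2.1},
    {"name": "arterial", "minutes": 26, "fuel_liters": 1.6},
]

# Fastest routing minimizes travel time; eco-friendly routing
# minimizes estimated fuel consumption instead.
fastest = min(routes, key=lambda r: r["minutes"])
eco = min(routes, key=lambda r: r["fuel_liters"])

print(fastest["name"], eco["name"])
```

The two criteria can pick different routes, which is why the choice is surfaced to the user rather than decided automatically.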
The Vertex AI platform: Vertex AI is a managed machine learning platform for deploying and maintaining AI models. Using pre-trained and custom tooling within a unified AI platform, users can design, deploy, and scale machine learning models more quickly. It is also compatible with open-source frameworks such as TensorFlow, scikit-learn, and PyTorch.
Pattern Recognition: With Google Photos, the tech giant has introduced an algorithm that compares photos for visual and conceptual similarity. The algorithm uses machine learning to convert images into numbers.
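“Converting images into numbers” means a model maps each photo to a numeric vector, and similar photos end up with similar vectors. A minimal sketch of comparing such vectors with cosine similarity, assuming hypothetical 4-dimensional vectors (real models use hundreds of dimensions, and the values below are invented):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented stand-ins for the numeric vectors a vision model
# might produce for each photo.
beach_photo = np.array([0.90, 0.10, 0.80, 0.20])
beach_photo_2 = np.array([0.85, 0.15, 0.75, 0.30])
city_photo = np.array([0.10, 0.90, 0.20, 0.70])

# The two beach photos score far closer to each other
# than either does to the city photo.
print(cosine_similarity(beach_photo, beach_photo_2))
print(cosine_similarity(beach_photo, city_photo))
```

Grouping or searching photos then reduces to finding vectors with high similarity scores.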
MUM: MUM stands for Multitask Unified Model and is based on the Transformer architecture, trained across 75 different languages. MUM can understand information across text and images, and may expand to audio and video in the future.