Dell Technologies and Meta to Drive Generative AI Innovation with Llama 2 On Premises

Dell Technologies (NYSE: DELL) is collaborating with Meta to make it easy for Dell customers to deploy Meta’s Llama 2 models on premises with Dell’s generative AI (GenAI) portfolio of IT infrastructure, client devices and professional services.

“We are at the beginning of a new era with generative AI transforming how industries operate, innovate and compete,” said Jeff Boudreau, chief AI officer, Dell Technologies. “With the Dell and Meta technology collaboration, we’re making open-source GenAI more accessible to all customers, through detailed implementation guidance paired with the optimal software and hardware infrastructure for deployments of all sizes. Now, customers can more easily deploy secure GenAI models on premises for powerful new approaches and insights.”

Open-source GenAI fuels innovation on premises

The collaboration simplifies the on-premises AI environment by bringing together Dell’s top-selling infrastructure portfolio and the Llama 2 family of AI models. Customers can accelerate their GenAI efforts on premises in a traditional data center or at edge locations. Dell has integrated Meta’s Llama 2 models into its system sizing tools to help guide customers to the right solution to power their Llama 2-based AI efforts.

The Dell Validated Design for Generative AI with Meta’s Llama 2 provides pre-tested and proven Dell infrastructure, software, and services to streamline deployment and management of on-premises projects. With fully documented deployment and configuration guidance, organisations can get their GenAI infrastructure up and running more quickly and operate Llama 2 with more predictability.

With Meta’s Llama 2 and the breadth of the Dell Generative AI Solutions technology and services portfolio, organisations of all sizes have access to more reliable tools to deliver GenAI solutions from desktops to core data centers, edge locations and public clouds.

Additional resources

● Blog: Dell and Meta Collaborate to Drive Generative AI Innovation

● Blog: Deploying Llama 2 on the Dell PowerEdge XE9680 Server

● White Paper: Llama 2: Inferencing on a Single GPU

● Design Guide: Generative AI in the Enterprise – Inferencing

Dell Technologies Growing Generative AI Portfolio Speeds Business Transformations

Dell Technologies (NYSE: DELL) expands its Dell Generative AI Solutions portfolio, helping businesses transform how they work along every step of their generative AI (GenAI) journeys.

“To maximize AI efforts and support workloads across public clouds, on-premises environments and at the edge, companies need a robust data foundation with the right infrastructure, software and services,” said Jeff Boudreau, chief AI officer, Dell Technologies. “That’s what we are building with our expanded validated designs, professional services, modern data lakehouse and the world’s broadest GenAI solutions portfolio.”

Customizing GenAI models to maximize proprietary data

The Dell Validated Design for Generative AI with NVIDIA for Model Customization offers pre-trained models that extract intelligence from data without building models from scratch.

This solution provides best practices for customizing and fine-tuning GenAI models based on desired outcomes while keeping information secure and on premises. With a scalable blueprint for customization, organizations now have multiple ways to tailor GenAI models to accomplish specific tasks with their proprietary data. Its modular and flexible design supports a wide range of computational requirements and use cases, spanning diffusion model training, transfer learning and prompt tuning.

Dell Validated Designs for Generative AI now support both model tuning and inferencing, allowing users to more quickly deploy GenAI models with proven infrastructure. Options include the Dell PowerEdge XE9680, the industry’s best performing AI server, and the Dell PowerEdge XE8640, with a choice of NVIDIA® Tensor Core GPUs, NVIDIA AI Enterprise software – which offers frameworks, pre-trained models and development tools, such as the NVIDIA NeMo™ framework – and Dell software. By combining compute power with storage options, such as Dell PowerScale and Dell ObjectScale, customers can rapidly feed models with multiple storage data types using the validated design. The infrastructure is also available as a subscription via Dell APEX.

“We’re implementing Dell PowerEdge XE9680 servers with NVIDIA H100 Tensor Core GPUs into the high-performance computing cluster at Princeton for large language modeling to help drive new levels of discovery,” said Sanjeev Arora, the Charles C. Fitzmorris Professor in Computer Science, Princeton. “This system gives researchers in natural sciences, engineering, social sciences and humanities the opportunity to apply powerful AI models to their work in areas such as visualization, modeling and quantum computing.”

Preparing data, people and processes for GenAI

Dell is applying its process and expertise to help customers generate better, faster business results with expanded GenAI professional services capabilities:

  • Data Preparation Services provide customers with a clean, accurate data set in the right format, enabling AI projects to move smoothly while simplifying data integration and delivering quality data output.
  • Dell Implementation Services establish an operational GenAI platform for inferencing and model customization, accelerating time to value. Paired with Dell Managed Services, Dell can operate the full NVIDIA-based GenAI solution, improving operational efficiency and allowing customers to focus on building their proprietary GenAI use cases.
  • Education Services help customers gain the critical skills to close the GenAI capabilities gap.

“Our recent study on Generative AI use in the enterprise made it clear organizations are adamant about being able to use their own data to customize key foundation models, but also need assistance in helping prep their data for that work,” said Bob O’Donnell, president and chief analyst, TECHnalysis Research. “Dell’s latest Generative AI solutions and partnerships offer a broad set of capabilities that help companies capitalize on this potential, bridging knowledge gaps and ensuring data drives discernible, impactful business results.”

Modernizing data infrastructure for AI and analytics

Dell and Starburst are strengthening their relationship to help customers accelerate AI and analytics efforts, culminating in an open, modern data lakehouse solution.

The solution will integrate Starburst’s analytics software with the Dell PowerEdge compute platform and Dell’s industry-leading storage,2 helping customers extract insights from data wherever it resides. Built on open software principles, the solution will give customers easy and secure access to multicloud data, maximizing its value for analytics and AI-driven workflows and deployments.

“Our customers have made it clear they need a robust data platform for accessing distributed data across multicloud environments to drive and operationalize AI efforts,” said Justin Borgman, CEO, Starburst. “By integrating our deep analytics capabilities with Dell’s leading infrastructure and global enterprise services, we can offer customers an open, multicloud data lakehouse solution that quickly and easily makes data available to AI workflows anywhere.”

Availability

  • Dell Validated Design for Generative AI: Model Customization is available globally through traditional channels and Dell APEX starting late October.
  • Dell Professional Services for Generative AI are available in select countries starting late October.
  • The Dell open, modern data lakehouse solution with Starburst has planned global availability in the first half of 2024.

EPAM launches Generative AI orchestration platform, DIAL

EPAM Systems, Inc., a leading digital transformation services and product engineering company, today announced the launch of its AI-powered DIAL Orchestration Platform (Deterministic Integrator of Applications and LLMs), which merges the power of Large Language Models (LLMs) with deterministic code — offering a secure, scalable and customisable AI workbench to streamline and enhance AI-driven business solutions.

Produced by EPAM’s Reliable AI Lab (RAIL), DIAL helps enterprises speed their experimentation and innovation efforts across an extensive range of LLMs, AI-native Applications and Custom Add-ons, and provides a practical approach for engineering business solutions with reliable AI capabilities.

The DIAL Platform offers a unified user interface, empowering businesses to leverage a spectrum of public and proprietary LLMs, Add-ons, APIs, Datastores and Business Applications. This integration promotes the development of novel enterprise assets that co-exist seamlessly with an organisation’s existing workflows.

Moreover, Applications and Add-ons can be implemented through diverse approaches, encompassing LangChain, LlamaIndex, Semantic Kernel or custom code — all within an integrated, secure and scalable framework. The DIAL Platform aggregates multicloud asset libraries, including components, routing and rate-limiting software, monitoring tools, load-balancing solutions and deployment scripts. This extensive, curated toolkit supports a wide range of business use cases and integration scenarios and offers approaches to significantly optimise the consumption of external LLMs.

“Since 2021, we have experimented with thousands of AI use cases, helping EPAM and our customers to identify and adopt Generative AI scenarios to power business transformation initiatives,” said Elaina Shekhter, Chief Marketing & Strategy Officer at EPAM. “The learnings from these efforts have led us to design and engineer a platform that accelerates the cost-effective development, reliability testing and operations of AI-embedded business applications.”

“RAIL, as part of EPAM’s broader R&D initiatives, concentrates on creating accelerators and components that encompass a full scope of advanced technologies, including Generative AI. These are designed to expedite the implementation of enterprise-scale solutions, embodying our commitment to innovation and our vision for a transformative future, driven by cutting-edge technology,” added Ilya Gorelik, Head of EPAM’s RAIL Lab.

In keeping with EPAM’s long-standing commitment to Open Source, portions of DIAL will be released under an Apache 2.0 licensing scheme as part of its launch. This initiative encourages responsible use, community innovation and the adoption of responsible AI enterprise standards within the industry.

To learn more about EPAM’s AI-powered DIAL Orchestration Platform, visit https://epam-rail.com.

Dell Technologies Expands AI Offerings to Accelerate Secure Generative AI Initiatives

Dell Technologies (NYSE: DELL) introduces new offerings to help customers quickly and securely build generative AI (GenAI) models on-premises to accelerate improved outcomes and drive new levels of intelligence.

New Dell Generative AI Solutions, expanding upon May’s Project Helix announcement, span IT infrastructure, PCs and professional services to simplify the adoption of full-stack GenAI with large language models (LLMs), meeting organizations wherever they are in their GenAI journey. These solutions help organizations of all sizes and across industries securely transform and deliver better outcomes.

“Generative AI represents an inflection point that is driving fundamental change in the pace of innovation while improving the customer experience and enabling new ways to work,” Jeff Clarke, vice chairman and co-chief operating officer, Dell Technologies, said on a recent investor call. “Customers, big and small, are using their own data and business context to train, fine-tune and inference on Dell infrastructure solutions to incorporate advanced AI into their core business processes effectively and efficiently.”

“Generative AI can help every enterprise transform its data into intelligent applications that enable them to solve complex business challenges,” said Manuvir Das, vice president, Enterprise Computing, NVIDIA. “Dell Technologies and NVIDIA are building on our long-standing relationship to enable organizations to harness this capability to better serve their customers, more fully support their employees and fuel innovation across their operations.”

With Dell Generative AI Solutions, the breadth of Dell’s portfolio, including Dell Precision workstations, Dell PowerEdge servers, Dell PowerScale scale-out storage, Dell ECS enterprise object storage and a broad set of services, provides the reliable tools to deliver GenAI solutions from desktops to core data centers, edge locations and public clouds.

CyberAgent, a major Japanese digital advertising company, selected Dell servers as the key IT infrastructure for its generative AI development and digital advertising.

“We decided to select Dell PowerEdge XE9680 servers equipped with NVIDIA H100 GPUs, which are optimized for generative AI applications,” said Daisuke Takahashi, solution architect of CIU, CyberAgent. “In addition, we value the ease of use of the Dell iDRAC management tool for secure local and remote server management.”

Full-stack GenAI for enterprises

The Dell Validated Design for Generative AI with NVIDIA is an inferencing blueprint, jointly engineered with NVIDIA, optimized to speed the deployment of a modular, secure and scalable platform for GenAI in the enterprise.

Traditional inferencing approaches have struggled to scale, to support LLMs for real-time results, and to ensure data can be easily used by AI infrastructure. This solution helps customers generate higher quality predictions and decisions with their own data, with faster time to value.

With a comprehensive verified inferencing approach, organizations can rapidly deploy GenAI projects and scale applications to transform processes in key areas, such as customer operations, content creation and management, software development and sales.

Dell Validated Designs are pre-tested, proven configurations to power GenAI inferencing efforts with Dell infrastructure, such as the Dell PowerEdge XE9680 or PowerEdge R760xa, with a choice of NVIDIA® Tensor Core GPUs, NVIDIA AI Enterprise software, the NVIDIA NeMo™ end-to-end framework and Dell software at its core. Customers can combine this with resilient and scalable unstructured data storage, including Dell PowerScale and Dell ECS storage. The infrastructure is available via Dell APEX, offering customers an on-premises deployment with a cloud consumption and management experience.

Services drive faster, more holistic GenAI outcomes

Dell Professional Services deliver a broad spectrum of new capabilities to help customers accelerate GenAI adoption to improve their operational efficiency and advance innovation.

These services begin with creating a new GenAI strategy that identifies high value use cases and a roadmap to achieve them. Dell also offers full-stack implementation services, based on the Dell Validated Design for GenAI with NVIDIA, and adoption services that apply the platform to specific use cases, such as customer operations or content creation. Once integrated into the business, Dell’s scaling services help improve operations through managed services, training or resident experts.

“Dell’s AI solutions offer enterprises the potential to right-size their GenAI efforts and help streamline operations as customers look to quickly deliver products and services across industry-specific use cases,” said Ashish Nadkarni, group vice president and general manager, worldwide infrastructure and BuyerView research, IDC. “In the era of intelligent automation, Dell Technologies is meeting organizations wherever they are in their GenAI journey, helping them position themselves for success in an increasingly intelligent and technology-driven world.”

Precision workstations provide secure GenAI development locally on the device

As the global leader in workstations,1 Dell Precision workstations allow AI developers and data scientists to develop and fine-tune GenAI models locally before deploying at scale. Precision workstations provide the performance and reliability – with up to four NVIDIA RTX 6000 Ada Generation GPUs in a single workstation – to run AI software frameworks 80% faster than the previous generation.2

Built-in AI software, Dell Optimizer, learns and responds to the way people work, improving performance across applications, network connectivity and audio. The latest feature allows mobile workstation users leveraging GenAI models to improve performance for the application in use while minimizing the impact on battery runtime.

“Our customers are looking to use generative AI in every aspect of their business, from monitoring agent behavior to detecting fraud,” said James Laird, chief operating officer, Intelligent Voice. “Recent advances in AI combined with the power of Dell’s AI solutions allows us to quickly build, test and deploy high-quality models at the speed our customers require.”

Availability

  • Dell Validated Design for Generative AI with NVIDIA is available globally through traditional channels and Dell APEX today.
  • Dell Professional Services for Generative AI are available in select countries now.
  • Dell Precision workstations (7960 Tower, 7865 Tower, 5860 Tower) with NVIDIA RTX 6000 Ada Generation GPUs will be available globally in early August.
  • Dell Optimizer adaptive workload will be available globally on select Precision mobile workstations on August 30.

Salesforce Announces AI Cloud – Bringing Trusted Generative AI to the Enterprise

Salesforce today announced AI Cloud, the fastest and most trusted way for Salesforce customers to supercharge their customer experiences and company productivity with generative AI for the enterprise. AI Cloud is a suite of capabilities optimized for delivering trusted, open, and real-time generative experiences across all applications and workflows. AI Cloud’s new Einstein GPT Trust Layer addresses the risks associated with adopting generative AI by enabling customers to meet their enterprise data security and compliance demands while still realizing the technology’s benefits. This unique blend of capabilities and security solidifies Salesforce’s position as the #1 AI CRM.

At the heart of AI Cloud is Einstein, the world’s first AI for CRM, which now powers over 1 trillion predictions per week across Salesforce’s applications. With generative AI, Einstein helps make every company and employee more productive and efficient across sales, service, marketing, and commerce.

AI Cloud will enable sales reps to quickly auto-generate personalized emails tailored to their customer’s needs, and service teams to auto-generate personalized agent chat replies and case summaries. Marketers can auto-generate personalized content to engage customers and prospects across email, mobile, web, and advertising. Commerce teams can auto-generate insights and recommendations to deliver customized commerce experiences at every step of the buyer’s journey. And, developers can auto-generate code, predict potential bugs in code, and suggest fixes.

Why trusted generative AI matters in the enterprise: Company leaders want to embrace generative AI, but are wary of the risks – hallucinations, toxicity, privacy, bias, and data governance concerns are creating a trust gap. New Salesforce research found that 73% of employees believe generative AI introduces new security risks and nearly 60% of those who plan to use the technology don’t know how to keep data secure.

AI Cloud will help fill that trust gap with the new Einstein GPT Trust Layer. The Einstein GPT Trust Layer will help prevent large-language models (LLMs) from retaining sensitive customer data. This separation of sensitive data from the LLM will help customers maintain data governance controls while still leveraging the immense potential of generative AI. The Einstein GPT Trust Layer sets a new industry standard for secure generative AI for the enterprise.

“AI is reshaping our world and transforming business in ways we never imagined, and every company needs to become AI-first,” said Marc Benioff, Chair and CEO, Salesforce. “AI Cloud, built on the #1 CRM, is the fastest and easiest way for our customers to unleash the incredible power of AI, with trust at the center driven by our new Einstein GPT Trust Layer. AI Cloud will unlock incredible innovation, productivity, and efficiency for every company.”

AI Cloud will integrate Salesforce technologies, including Einstein, Data Cloud, Tableau, Flow, and MuleSoft, to provide trusted, open generative AI that is enterprise ready.

Trusted and Open: The Einstein GPT Trust Layer will enable companies to get started with trusted generative AI faster by optimizing the right model for the right task. It will also provide deployment capabilities for any relevant LLM while helping companies maintain their data privacy, security, residency, and compliance goals.

  • Use of Third-Party LLMs: As part of Salesforce’s commitment to an open ecosystem, AI Cloud is designed to host LLMs from Amazon Web Services (AWS), Anthropic, Cohere, and others — entirely within Salesforce’s infrastructure. AI Cloud will help maintain customer prompts and responses in the Salesforce infrastructure. In addition, Salesforce and OpenAI have established a shared trust partnership to deliver joint content moderation using OpenAI’s leading Enterprise API and best-in-class safety tools in conjunction with the Einstein GPT Trust Layer to help keep data retained in Salesforce.
  • Use of Salesforce LLMs: AI Cloud will enable customers to use Salesforce LLMs developed by Salesforce AI Research to power advanced capabilities such as code generation and business process automation assistance, fundamentally transforming how businesses interact with their CRM software. Salesforce’s LLMs – including CodeGen, CodeT5+, and CodeTF – help companies increase productivity, bridge the talent gap, reduce the cost of implementations, and better detect incidents.
  • Bring Your Own Model (BYOM): Customers who have trained their own domain-specific models outside of Salesforce will benefit from AI Cloud while storing data on their own infrastructure. These models, whether running through Amazon SageMaker or Google’s Vertex AI, will connect directly to AI Cloud through the Einstein GPT Trust Layer. In this scenario, customer data can remain within the customers’ trust boundaries.

Business Ready: AI will fuel more than $15 trillion in global economic growth and boost GDP by 26% by 2030*. AI Cloud will harness the full power of Salesforce, helping make companies and employees more productive and efficient.

  • Generative AI across every Salesforce application: Salesforce is bringing trusted generative AI to every product with Sales GPT, Service GPT, Marketing GPT, Commerce GPT, Slack GPT, Tableau GPT, Flow GPT, and Apex GPT.  See more about Salesforce’s trusted, open, and business-ready generative AI-powered applications here.
  • Prompt templates and builders: The prompts used to generate AI content directly influence the quality and relevance of the generated content. Salesforce is developing optimized AI prompts that use harmonized data to ground generated outputs in every company’s unique context. These context-rich prompts will help sales, service, marketing, commerce, and IT teams get instant value from trusted generative AI, without hallucinations, while reducing time and cost.

The customer perspective:

“Our goal is to deliver more personalized member engagement, make our processes more efficient and cost-effective, and drive innovation across our team within a safe and trusted environment,” said Shohreh Abedi, EVP, Chief Operations Technology Officer, and Member Experience at AAA – The Auto Club Group. “We’re accelerating our digital transformation with Salesforce, and AI Cloud will help us implement AI across our entire business, including devops, support, sales, and underwriting.”

“Embedding AI into our CRM has delivered huge operational efficiencies for our advisors and clients,” said Greg Beltzer, Head of Tech for RBC US Wealth Management. “We believe that this technology has the potential to transform the way businesses interact with their customers, deliver personalized experiences, and drive customer loyalty. We are excited to explore this opportunity with Salesforce and drive the next generation of personalized customer experiences.”

Salesforce and Accenture’s Acceleration Hub for Generative AI

Salesforce recently announced plans to collaborate with Accenture to accelerate the deployment of generative AI for CRM. Together, the companies intend to establish an acceleration hub for generative AI that provides organizations with the technology and experience they need to scale generative AI for CRM — helping to increase employee productivity and transform customer experiences.

Pricing and Availability

Specific feature availability details are as follows:

  • The Einstein GPT Trust Layer will be generally available in June 2023.
  • Service GPT is in pilot today, and will be generally available in June 2023.
  • Sales GPT is in pilot today, and will be generally available in July 2023.
  • Marketing GPT will be in pilot in June 2023, and generally available February 2024.
  • Commerce GPT is in pilot today, and will be generally available in July 2023.
  • Apex GPT will be in pilot in June 2023.
  • Flow GPT will be in pilot in October 2023.
  • Slack GPT is in beta today, and will be generally available later this year.
  • Tableau GPT will be in pilot in November 2023.

New generative AI-powered Zoom IQ features available

Today Zoom Video Communications, Inc. (NASDAQ: ZM) launched key features of Zoom IQ, a smart companion that empowers collaboration and unlocks people’s potential through generative AI. Now available through free trials for customers in select plans,[1] the Zoom Meeting summary and Zoom Team Chat compose features will help teams improve productivity, balance workday priorities, and collaborate more effectively.

“With the introduction of these new capabilities in Zoom IQ, an incredible generative AI assistant, teams can further enhance their productivity for everyday tasks, freeing up more time for creative work and expanding collaboration,” said Smita Hashim, chief product officer at Zoom. “There is no one-size-fits-all approach to large language models, and with Zoom’s federated approach to AI, we are able to bring powerful capabilities to our customers and users through Zoom’s own models as well as our partners’ models.”

Zoom’s federated approach to AI leverages its own proprietary large language AI models, those from leading AI companies — such as OpenAI and Anthropic — and select customers’ own models. With this flexibility to incorporate multiple types of models, Zoom’s goal is to provide the most value for its customers’ diverse needs.

The first set of Zoom IQ capabilities is now generally available to Zoom customers in select plans as free trials:

  • Meeting summary: Zoom Meeting hosts can now create a summary powered by Zoom’s own large language models and share it via Zoom Team Chat and email without recording the conversation. Hosts receive automated summaries and can share them with attendees and those who didn’t attend to improve team collaboration and speed up productivity.
  • Chat compose: Zoom Team Chat users can now use the generative AI-powered compose feature, which leverages OpenAI’s technology, to draft messages based on the context of a Team Chat thread in addition to changing message tone and length as well as rephrasing responses to customise text recommendations.

Zoom is committed to empowering customers with the tools they need to control their data. In order to use these features, customers will need to go to the Zoom admin console and opt into the free trials for each feature. As part of the opt-in, customers will also select data-sharing options with Zoom. Account admins may change this data-sharing selection at any time. Customer data will not be used to train third-party models. More information can be found here.

To further help our customers and users, Zoom will continue to enhance its products with Zoom IQ capabilities. The next set of generative AI-powered features, scheduled to be released soon, will allow users to draft email content, summarise Team Chat threads, organise ideas, and draft whiteboard content:

  • Email compose: Harnessing the power of generative AI, users will get email draft suggestions in response to the conversational context from prior Zoom Meetings, Zoom Phone calls, and email threads. Initially available in Zoom IQ for Sales, sales professionals can now quickly follow up with customers based on the context of their last conversation. Email compose will be generally available in the coming weeks.

  • Zoom Team Chat thread summaries: Ever step away from the computer only to come back to a flurry of Team Chat messages? Available in the coming months, Team Chat thread summaries will allow users to catch up with the click of a button.

  • Meeting queries: Joining a Zoom Meeting late can be both disruptive and confusing for the latecomer, but not anymore. Meeting queries will allow users to catch up quickly without disrupting the meeting flow by discreetly submitting a query via the in-meeting chat and receiving a generative AI-created summary of what they missed. The meeting queries feature is expected to be generally available in the coming months.

  • Whiteboard draft: Who hasn’t experienced the “cold start” problem? A slow start to a brainstorming session can put a damper on idea generation, but with whiteboard draft, teams will be able to get a set of initial ideas, simply using text prompts. The whiteboard draft feature is expected to be generally available in the coming months.

  • Whiteboard synthesise: Brainstorming sessions typically end with a lot of ideas that need to be organised in order to execute. The whiteboard synthesise feature automatically organises ideas into categories, so teams can get to work faster. This feature is expected to be generally available in the coming months.

TCS Announces Generative AI Partnership with Google Cloud and New Offering for Enterprise Customers

Tata Consultancy Services (TCS) has announced an expanded partnership with Google Cloud and the launch of its new offering, TCS Generative AI, which leverages Google Cloud’s generative AI services to design and deploy custom-tailored business solutions, helping clients harness the power of this exciting new technology to accelerate their growth and transformation.

Building on its deep domain knowledge across multiple industry verticals and investments in research and innovation, TCS has developed a large portfolio of AI-powered solutions and intellectual property in the areas of AIOps, Algo Retail™, smart manufacturing, digital twins and robotics. The company is currently working with clients in multiple industries, to explore how generative AI can be used to deliver value in their specific business contexts.

This new offering is powered by Google Cloud’s generative AI tools – Vertex AI, Generative AI Application Builder and Model Garden – and TCS’ own solutions. TCS will use its client-specific contextual knowledge, proven design thinking and agile development processes to ideate solutions jointly with clients, rapidly prototype the most promising ideas and build full-fledged transformation solutions with enhanced time to value.

These collaborative exercises will utilize TCS Pace Ports™, the company’s co-innovation hubs located in New York, Pittsburgh, Toronto, Amsterdam and Tokyo, where clients can also engage with academic researchers and start-up partners from TCS’ extended innovation ecosystem.

TCS has been investing in scaling its expertise in rapidly evolving cloud technologies. It has over 25,000 engineers certified on Google Cloud. In addition, TCS has over 50,000 associates trained in AI, with plans to earn 40,000 skill badges on Google Cloud Generative AI within the year, to support the anticipated demand for its new offering.

“With deep contextual knowledge of our customers’ businesses, we are well positioned to build innovative enterprise-level solutions using generative AI. Our launch partnership with Google Cloud on generative AI enables us to rapidly create value for our customers. TCS is investing in assets, frameworks, and talent to harness the power of generative AI to enable growth and transformation for our customers,” said Krishnan Ramanujam, President, Enterprise Growth Group, TCS.

“TCS’ expertise in business transformation and its commitment to train thousands of people on Google Cloud Generative AI will be important assets for businesses accelerating their generative AI adoption,” said Kevin Ichhpurani, Vice President, Global Partner Ecosystems and Channels, Google Cloud. “TCS and Google Cloud will help address industry-specific challenges and opportunities with generative AI capabilities and solutions, with a focus on addressing real-world use cases and adding business value.”

The TCS Google Business Unit offers customers a full catalog of services and solutions, leveraging TCS’ contextual knowledge, industry-focused innovation, and Google Cloud’s platform capabilities. Its offerings include advisory, foundational cloud-build and security services, applications and data modernization, AI build and deployment services, a managed services model for hybrid and multi-cloud environments, and fit-for-purpose digital solutions across industries.

TCS provides cloud-native services and solutions across new technologies such as generative AI, intelligent edge-to-core, and blockchain to enhance end-customer value. TCS has achieved 25 specializations and has received several Google Cloud awards for its comprehensive solutions, including 2021 Industry Solution Partner of the Year for Retail, 2021 Global Diversity & Inclusion Partner of the Year, and 2020 Breakthrough Partner of the Year.

To learn more about the TCS Google Business Unit, visit: https://www.tcs.com/what-we-do/services/cloud/google