Slack strengthens its AI-powered platform as it continues rolling out native generative AI capabilities to users, including enhanced search, channel and thread summaries, and a new recap feature

Customers like Wayfair, Beyond Better Foods, and more depend on Slack AI to stay ahead of the workday and better surface and prioritize information


Salesforce today announced that Slack AI — which uses a company’s conversational data to help users work faster and smarter — is now available to all paid Slack customers with expanded language support. Now, businesses of all sizes can access a trusted and intuitive generative AI experience built natively on the secure platform where their work already happens.

Slack AI now includes:

Why it matters: Customers are already saving an average of 97 minutes per user each week using Slack AI to find answers, distill knowledge, and spark ideas, according to an internal analysis. Yet, while 94% of executives say that incorporating AI into their organization is an urgent priority, only 1 in 4 desk workers report that they have tried AI tools at work, according to the latest research by the Workforce Lab from Slack. 

Customer deep dive: From large enterprises to small businesses, customers use Slack AI to prioritize exactly what they need to know, when they need to know it.

Wayfair replaced its messaging software with Slack in 2016, and now uses Slack AI to help distill its collected data more efficiently. The global retailer uses enhanced search to ask questions in natural language and find concise answers in relevant channels, without having to sort through lengthy messages.

Slack AI gets people accurate information faster, from any channel.

– Taylor Keck, Senior Engineer, Enterprise Solutions, Wayfair

Beyond Better Foods uses Slack as its primary communication platform. The healthy dessert brand’s operations team uses Slack AI’s enhanced search capabilities to fast-track answers for logistics planning and recaps to keep track of select channels, saving time and staying focused.

Employees at HR provider ProService Hawaii use Slack AI conversation summaries to stay informed and catch up after multiple back-to-back meetings.

Being able to pick the day, week, or month I’d like to catch up on with Slack AI has been so impactful.

– Jason Morita, Product Owner, ProService Hawaii

What’s next: In the future, Slack AI’s search and summarization capabilities will tap into new data sources – including files, Slack apps, canvases, and clips – to enhance the breadth and depth of context that Slack AI can access. For example, Slack AI will help users get more value out of huddles, Slack’s feature for lightweight audio or video calls. Slack AI will deliver a summary of key takeaways and action items, making it easy to turn live discussions into next steps.

Slack will also become the best place for users to engage with assistants. This includes an integration with Einstein Copilot, a conversational AI assistant for Salesforce CRM. Users will be able to bring AI-powered CRM insights directly into Slack so they can talk to Salesforce data as easily as they talk to their teams.

Trust and security: Slack AI runs on Slack’s infrastructure and upholds the same security practices and compliance standards that customers expect from Salesforce. Slack AI’s large language models (LLMs) are hosted in Slack’s own virtual private cloud (VPC), ensuring customer data remains in-house and exclusively for that organization’s use. Customer data will not be used to serve other clients, directly or indirectly, and Slack AI does not use customer data for LLM training purposes.

Pricing and availability: 

Learn more:

Forward-looking Statement: The above is intended for informational purposes. Please do not rely on this information in making your purchasing decisions. The development, release, and timing of any products, features or functionality not currently available remain at the sole discretion of Slack and are subject to change.

A media feeding frenzy over the exploding scale and increasing costs of artificial intelligence (AI) has created a divide between the reality of the technology and its perception among decision makers, explained Silvio Savarese, Chief Scientist of Salesforce AI Research, in a recent interview.

Whether these concerns come from worries about the bottom line, the impact of AI on the environment, or even basic questions of fairness and access, Savarese believes that changing these misconceptions requires an understanding of when scale is necessary to deliver high-quality AI outputs — and when it isn’t.

In the interview, Savarese also shared why the environmental and financial price tag of AI doesn’t have to be as head-spinning as the headlines might suggest, and how understanding the different scales and performance of AI models can help any business responsibly harness this transformative technology to boost productivity, build deeper relationships with customers, and enhance daily workflows and processes.

Q. Today’s well-known Large Language Models (LLMs) are getting a lot of negative attention for the compute power they require to run — both in terms of operating cost and environmental impact. Are models this large necessary for businesses to tap into the power of generative AI?

Rather than asking if the scale of today’s LLMs is necessary, let’s ask what it’s necessary for.

The scale and size of an AI deployment isn’t inherently advantageous. Rather, when implementing AI, there’s a range of possibilities and trade-offs that should be explored. Remember, the ChatGPTs of the world are designed to do more or less everything, and that makes them very different from most enterprise applications. They can help with homework, suggest holiday recipes, and even reimagine La bohème’s libretto as Socratic dialogues. It’s a great party trick, albeit an expensive one: training OpenAI’s GPT-4 reportedly cost more than $100 million. But that isn’t what enterprises are using AI for.

The scale and size of an AI deployment isn’t inherently advantageous. Rather, when implementing AI, there’s a range of possibilities and trade-offs that should be explored.”

Silvio Savarese, Chief Scientist of Salesforce AI Research

There’s also the environmental impact of these large models. The hypothetical long-term benefits of AI in combating climate change, in areas such as monitoring emissions and optimizing the transportation of goods, are significant, with the potential to reduce global emissions by 5% to 10% by 2030. However, LLMs, while groundbreaking in their capabilities, require enormous computing resources, exacerbating pressing concerns such as the release of greenhouse gases, the depletion of water resources, and the extraction of raw materials along the supply chain. Given the urgency of the climate crisis and the imperative to combat planet-warming emissions, it’s paramount that the development and implementation of AI technologies don’t outpace the capacity of our planet’s resources.

In contrast to LLMs like ChatGPT or Anthropic’s Claude, an AI model like our own CodeGen 2.5 has a deliberately limited set of tasks: helping developers write, understand, and debug code faster. Despite its small scale, its performance is on a par with models twice its size, delivering remarkable efficiency without compromising on utility. So even as it helps developers work faster, it also reduces cost, latency, and, crucially, environmental impact compared to larger-scale LLMs.
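
To make “small scale” a little more concrete, here is a minimal, illustrative sketch of querying a compact open-source code model locally with the Hugging Face transformers library. The checkpoint name and prompt are examples only, not a prescribed Salesforce workflow; any similarly sized code model could be substituted, and hardware requirements will vary by checkpoint.

```python
# Minimal sketch: running a compact code-generation model locally.
# Assumes the `transformers` and `torch` packages are installed; some
# checkpoints may also require extra tokenizer dependencies.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Salesforce/codegen25-7b-mono"  # example checkpoint, not a requirement

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Ask the model to complete a small, well-defined coding task.
prompt = 'def parse_iso_date(value: str):\n    """Parse an ISO-8601 date string."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```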

Businesses should not be asking whether they need scale, but how they want to apply that scale. Depending on the task, the answer may vary wildly‌ — ‌and bigger is most certainly not always better.

Q. Okay, but large models still outperform smaller ones, right?

Believe it or not, even this isn’t a clear-cut answer. Large models do generally outperform their smaller counterparts when it comes to flexibility. But therein lies the nuance that is so often left out of conversations around LLMs: as tasks become narrower, better defined, and more specific to a particular organization or domain (exactly what enterprise AI is all about), it’s possible to do more with less.

In other words, most models aren’t meant to be everything to everyone, which frees up enterprises to focus on their needs while saving huge amounts of resources in the process.

Q. Are you saying small models don’t just keep up with larger ones, but actually outperform them?

Not all the time, no. But under the right circumstances, small models really can offer the best of all worlds: reduced cost, lower environmental impact, and improved performance. Small models are often neck-and-neck with large ones when it comes to tasks like knowledge retrieval, technical support, and answering customer questions. 

Small models are often neck-and-neck with large ones when it comes to tasks like knowledge retrieval, technical support, and answering customer questions.”

Silvio Savarese, Chief Scientist of Salesforce AI Research

In fact, with the right strategy, they can even perform better. This includes models from the open-source world, such as Salesforce’s own XGen 7B. That model is trained specifically on longer input sequences, which helps it with tasks like summarizing large volumes of text and even writing code, and it consistently exceeds the performance of larger models by leveraging better grounding strategies and better embeddings. Additional small-scale models from our AI research organization will be released soon and will power generative AI capabilities for critical customer use cases.
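
To make “grounding strategies and better embeddings” concrete, here is a small, hypothetical sketch of retrieval-grounded prompting: documents are embedded once, the passage most relevant to a question is retrieved by cosine similarity, and only that passage is handed to a compact generative model as context. The library, embedding model, and sample documents are illustrative assumptions, not the specific pipeline Salesforce uses.

```python
# Illustrative grounding sketch (assumes `sentence-transformers` and `numpy`).
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small embedding model

documents = [
    "Refunds are processed within 5 business days of approval.",
    "Enterprise contracts renew annually unless cancelled 30 days in advance.",
    "Support is available 24/7 via chat for premium customers.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str) -> str:
    """Return the document most similar to the question (cosine similarity)."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec  # vectors are normalized, so dot product = cosine
    return documents[int(np.argmax(scores))]

question = "How long do refunds take?"
context = retrieve(question)

# The grounded prompt lets a small model answer from trusted context:
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # pass `prompt` to any compact LLM of your choice
```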

Q. Lowering costs is great, but transparency is just as vital. Scale doesn’t matter if I can’t trust the output, right?

Scaling down models isn’t just about saving money. It’s one of the best ways to ensure AI outputs are reliable. Large models are exciting, but they often don’t provide much information about the data they use. This leaves companies with no choice but to monitor deployments closely to catch harmful or inaccurate outputs. Needless to say, this falls far short of the standard most businesses expect from their technology.

Scaling down models isn’t just about saving money. It’s one of the best ways to ensure AI outputs are reliable.”

Silvio Savarese, Chief Scientist of Salesforce AI Research

Instead, consider a simple, intuitive fact: smaller models are trained on smaller data sets, which are inherently easier to document and understand‌ — ‌an increasingly important trust and transparency measure as the role of LLMs grows to include mission-critical applications that don’t just require reliability, but accountability as well. 

Additional steps for verifying that generative AI produces trusted results are, of course, critical: the Einstein Trust Layer is Salesforce’s accountability model, helping businesses efficiently manage data privacy, security, and transparency. The Einstein Trust Layer serves as a secure intermediary for user interactions with LLMs. Its functions include obscuring personally identifiable information (PII), monitoring output for harmful content, safeguarding data privacy, prohibiting the storage or use of customer data for future training, and reconciling differences among various model providers.
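
Purely as an illustration (this is not the Einstein Trust Layer’s actual implementation), the pattern of a trusted middle layer can be sketched as a thin wrapper that masks obvious PII before a prompt leaves the organization and screens the model’s response before it reaches the user. The regular expressions and block list below are deliberately simplistic placeholders.

```python
# Hypothetical sketch of a trust-layer-style wrapper around an LLM call.
import re
from typing import Callable

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
BLOCKED_TERMS = {"password", "social security"}  # toy output filter

def mask_pii(text: str) -> str:
    """Replace obvious emails and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

def guarded_call(prompt: str, call_llm: Callable[[str], str]) -> str:
    """Mask PII on the way in, screen the response on the way out."""
    response = call_llm(mask_pii(prompt))
    if any(term in response.lower() for term in BLOCKED_TERMS):
        return "[Response withheld by policy]"
    return response

# Example with a stand-in model function:
fake_llm = lambda p: f"Echo: {p}"
print(guarded_call("Contact jane@example.com or 555-123-4567 about renewal.", fake_llm))
```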

Q. What if companies really do need more scale?

There are, of course, times when increasing scale is simply unavoidable, and the power of small models doesn’t negate the potential of bigger ones. But again, let’s ask the right questions: rather than simply asking whether you need scale, ask what you need it for. The answer will inform your strategy from the very first steps, because there are, ultimately, two very different ways to scale: increasing the parameter count of a single model, or orchestration — the connection of multiple models into a single, larger deployment, analogous to multiple human workers coming together as a team.

Orchestration has the potential to offer the power of scale while still keeping its pitfalls in check. After all, even small models can do amazing things when combined with one another, especially when each is geared toward a specific strength that the others might lack: one model to focus on information retrieval, one to focus on user interactions, another to focus on the generation of content and reports, and so on. In fact, smaller models are arguably a more natural choice in such cases, as their specialized focus makes their role in the larger whole easier to define and validate. 
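
One lightweight way to picture this kind of orchestration, purely as a hypothetical sketch, is a coordinator that routes each step of a request to a small component specialized for it: one for retrieval, one for drafting content, one for summarizing the result. The classes below are stand-ins for whatever specialized models an organization actually deploys.

```python
# Hypothetical orchestration sketch: small specialized components behind one coordinator.
from dataclasses import dataclass

@dataclass
class Retriever:
    knowledge: dict
    def run(self, query: str) -> str:
        # Stand-in for a small retrieval model or search index.
        return self.knowledge.get(query, "no matching record")

@dataclass
class Writer:
    def run(self, facts: str) -> str:
        # Stand-in for a small content-generation model.
        return f"Draft report based on: {facts}"

@dataclass
class Summarizer:
    def run(self, text: str) -> str:
        # Stand-in for a small summarization model.
        return text[:60] + ("..." if len(text) > 60 else "")

class Orchestrator:
    """Chains specialized components the way a team divides up work."""
    def __init__(self) -> None:
        self.retriever = Retriever({"q3 pipeline": "Pipeline grew 12% quarter over quarter."})
        self.writer = Writer()
        self.summarizer = Summarizer()

    def handle(self, query: str) -> str:
        facts = self.retriever.run(query)
        draft = self.writer.run(facts)
        return self.summarizer.run(draft)

print(Orchestrator().handle("q3 pipeline"))
```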

In other words, small models can be combined to solve ever-bigger problems, all while retaining the virtues of their small size‌ — ‌each can still be cleanly trained, tuned, and understood with an ease large models can’t touch. And it’s yet another example of why a simple parameter count can often be misleading.

Q. How can businesses best incorporate LLMs?

LLMs are a hugely complex topic, and there’s room for any number of voices in the conversation. But we’re overdue for a more balanced, strategic perspective on the question of how much we need to get what we want: how much time, how much compute, and, ultimately, how much cost. The answer isn’t anywhere near as simple as the impression one might get from the headlines, and I believe amazing things can be done on just about any budget. It’s just a matter of knowing what’s possible.

Go deeper:

13% of European nonprofit organisations are already using AI, and 22% are ‘optimistic but cautious’ about the technology, citing concerns around data security and privacy, loss of human expertise, and job displacement. That’s according to the latest annual Nonprofit Pulse report from the European Fundraising Association, released today in partnership with the UK’s Chartered Institute of Fundraising and Salesforce.

The survey of 671 senior representatives of nonprofit organisations across 20 European nations explores how nonprofits are responding to economic headwinds. For the first time, it includes a focus on how nonprofits are using AI, or plan to, and their view on its opportunities and the challenges around its use. 

Why it matters: With increased workload, fundraising, and supporting staff and their wellbeing among the biggest challenges facing nonprofit organisations, many are responding by seizing the opportunities available to them – from advances in technology and AI to greater collaboration between organisations.

The Salesforce perspective: “AI represents a tremendous opportunity for nonprofits of all sizes, and will be the key to reducing workloads for overburdened staff, improving fundraising outcomes, accelerating mission impact, and so much more. But, successful adoption in the sector depends on the use of trusted AI that can help nonprofits safely take advantage of their data with confidence,” said Lori Freeman, VP & GM of Nonprofits, Salesforce. 

By embracing AI and educating employees on how to use it in a trusted and ethical way, nonprofits have a once-in-a-generation opportunity to modernize their operations and impact.

Lori Freeman, VP & GM of Nonprofits, Salesforce

Learn more: Find out more about the report’s findings here

New Caseworker Narrative Generation helps government employees work cases faster by automating manual tasks with AI

Salesforce now offers several FedRAMP-compliant features for products like Field Service, Privacy Center, Security Center, and GovSlack


Salesforce today announced Public Sector Einstein 1 for Service, which brings together CRM, trusted AI, and data capabilities to help government employees automate administrative tasks and provide faster service to constituents. Built on Salesforce’s Einstein 1 Platform, the offering lets public sector organizations quickly and easily generate case reports, capture real-time call transcriptions, and document and format case interactions in one place.

Why it matters: BCG estimates that generative AI could unlock a $1.75 trillion productivity opportunity annually across many functions and levels of government. However, 62% of IT decision makers across industries, including those in the public sector, feel their organization’s data systems are not ready to leverage AI.

Innovation in action: Public Sector Einstein 1 for Service offers government contact center agents and case managers trusted conversational and generative AI, enabling them to be more productive and efficient. Features include: 

Caseworker Narrative Generation helps caseworkers create case reports and summaries in natural language. 

High-quality AI requires high-quality data and insights: Public Sector Einstein 1 for Service also includes Data Cloud, which connects and harmonizes data and uses it to power government agency applications. 

Data Cloud for the Public Sector brings in data from different sources to build unified constituent profiles.
Interaction Notes for Public Sector helps caseworkers capture detailed notes of their interactions with constituents or other case participants.

What’s new in compliance: Salesforce also now offers several Federal Risk and Authorization Management Program (FedRAMP) compliant tools to help government agencies drive efficiency and productivity while meeting regulatory requirements. These tools include: 

With Public Sector Einstein 1 for Service, organizations can implement trusted AI to become more efficient, better manage and harmonize their data, and give employees the tools they need to better serve their constituents, all while driving their mission forward.

Nasi Jazayeri, EVP & GM, Public Sector 

Salesforce perspective: “Public sector organizations want to simplify their technology stack, better engage with constituents, and reduce employees’ administrative burdens while improving employee productivity. With Public Sector Einstein 1 for Service, organizations can implement trusted AI to become more efficient, better manage and harmonize their data, and give employees the tools they need to better serve their constituents, all while driving their mission forward.” – Nasi Jazayeri, EVP & GM, Public Sector 

Availability: 

More information: 

Any unreleased services or features referenced here are not currently available and may not be delivered on time or at all. Customers should make their purchase decisions based upon features that are currently available.

In a recent Salesforce survey, a striking 60% of public sector IT professionals identified a shortage of artificial intelligence (AI) skills as their top challenge to implementing AI.

Why it matters: AI could save hundreds of millions of government staff hours and billions of dollars annually, according to Deloitte. The benefits of AI are only possible if the public sector workforce has the skills to harness the technology. Government agencies are already being directed to implement guidelines and build teams to support the use of AI. This includes expanding and upskilling their AI talent and designating a new Chief AI Officer, which every federal agency was recently ordered to hire.

Salesforce perspective: “Training and skills development are critical first steps for the public sector to leverage the benefits of AI. By investing in new skills like prompt development, public sector leaders can empower their workforce to use AI to increase productivity, build deeper relationships with constituents, and improve the quality of public services.” – Casey Coleman, SVP, Global Government Solutions

By investing in new skills like prompt development, public sector leaders can empower their workforce to use AI to increase productivity, build deeper relationships with constituents, and improve the quality of public services.

Casey Coleman, SVP, Global Government Solutions

The Salesforce research found:

Public sector faces a deeper AI skills gap than other industries

IT professionals in the public sector are about a third more likely to say there’s an AI skills gap in their organization, compared to the industry* average.

Public sector IT professionals struggle with implementing AI in their organization

AI brings opportunity for efficiency gains in the public sector

By bridging the AI skills gap, organizations can create new efficiencies in the public sector. Salesforce’s survey shows that the public sector’s main goal with AI is to automate routine tasks.

Read more

*Methodology: In partnership with Vanson Bourne, Salesforce conducted a double-anonymous survey of 600 IT professionals (200 IT leaders and 400 IT individual contributors) in Australia, France, Germany, the United Kingdom, and the United States. Respondents work across industries, including technology, financial services, media and entertainment, manufacturing, retail, healthcare, the public sector, and more. The survey was fielded in December 2023 and January 2024.

Salesforce today released new bug bounty learning content on Trailhead, Salesforce’s free online learning platform. This content provides the resources for any company to build its own bug bounty program as the cybersecurity landscape rapidly evolves.

Why it matters: Bug bounty programs, which provide financial rewards to ethical hackers who discover software vulnerabilities, are an effective way for companies to gain insights into bad actors and stay ahead of evolving AI-powered security threats.

Go deeper: The bug bounty series on Trailhead breaks down the process for developing programs into bite-sized learning, including:

The bigger picture: From the volume of identified potential vulnerabilities to the firsthand intel on how hackers are using AI, bug bounty programs offer substantial ROI for organizations. Salesforce’s program, for example, has awarded over $18.9 million in bug bounties since 2015 to its ethical hackers, who have reported nearly 30,600 potential vulnerabilities.

Salesforce perspective: “As a trusted advisor to our customers, we share security tools and information they need to be successful. By providing the resources they need to establish their own bug bounty program and engage with ethical hackers, we are empowering companies to increase customer trust in the age of AI,” said Brad Arkin, Chief Trust Officer.

By providing the resources they need to establish their own bug bounty program and engage with ethical hackers, we are empowering companies to increase customer trust in the age of AI.

Brad Arkin, Chief Trust Officer, Salesforce

The Trailblazer perspective: “As the cybersecurity landscape continues to evolve rapidly, Trailhead has been an incredible resource to continually learn new skills. Having a playbook to seamlessly set up a bug bounty program will unlock new capabilities and reshape how BACA Systems thinks about strengthening security practices,” said Andrew Russo, Salesforce Architect, BACA Systems.

Learn more:

Salesforce today announced AI-powered enhancements to its MuleSoft automation, integration, and API management solutions that help business users and developers improve productivity, simplify workflows, and accelerate time to value. 

MuleSoft’s Intelligent Document Processing (IDP) helps teams quickly extract and organize data from diverse document formats including PDFs and images. Unlike other automation solutions, MuleSoft’s IDP is natively integrated into Salesforce Flow, which provides customers with an end-to-end automation experience. Additionally, to speed up project delivery, MuleSoft has embedded Einstein, Salesforce’s predictive and generative AI assistant, in its pro-code and low-code tools. This empowers users to build integrations and automations using natural language prompts directly in IDP, Flow Builder, and Anypoint Code Builder. 
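
MuleSoft’s IDP is a managed, low-code product, but the underlying idea of document data extraction can be sketched in a few lines. The example below is an assumption-laden illustration rather than MuleSoft’s actual API: it pulls raw text out of a PDF with the open-source pypdf library and picks out a couple of fields with regular expressions; the file name and field patterns are hypothetical.

```python
# Illustrative document-extraction sketch (not MuleSoft's IDP API).
# Assumes the open-source `pypdf` package and a local file named invoice.pdf.
import re
from pypdf import PdfReader

reader = PdfReader("invoice.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Toy field extraction: production IDP systems use trained models, not regexes.
invoice_number = re.search(r"Invoice\s*#?\s*(\w+)", text)
total_due = re.search(r"Total\s*Due[:\s]*\$?([\d,.]+)", text)

record = {
    "invoice_number": invoice_number.group(1) if invoice_number else None,
    "total_due": total_due.group(1) if total_due else None,
}
print(record)  # structured data ready to hand to a downstream workflow
```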

Why it matters: Most IT teams are overloaded with project requests. Last year alone, these teams saw requests rise by an estimated 39% — making it difficult for developers to keep up with the pace of business-critical work. Fortunately, generative AI makes it possible to automate many processes, which is a key reason why 86% of IT leaders believe the technology will soon play a prominent role in their organizations.

What’s new in automation:

What’s new in MuleSoft integration and API management:

Salesforce perspective: “Developers are on the frontlines of implementing AI. But to unlock this exciting technology’s full power at scale, organizations need to activate their business users to participate in this implementation. With Einstein powering MuleSoft automation and integration products, every team across an organization can use AI to build and drive seamless customer experiences.” – Vijay Pandiarajan, VP, Product Management

But to unlock this exciting technology’s full power at scale, organizations need to activate their business users to participate in this implementation.

Vijay Pandiarajan, VP, Product Management

Customer perspective: “We have fewer resources and increasing customer demands, so we must find ways to improve our processes and increase productivity wherever we can. AI with integration and automation can help us do this. It’s table stakes now. MuleSoft and Einstein can help us make the most of our data securely to generate relevant outcomes, and overall allow us to innovate faster.” – James Grover, VP – Director of Software Engineering and System Development at BankUnited

Availability:

Learn more:

Any unreleased services or features referenced here are not currently available and may not be delivered on time or at all. Customers should make their purchase decisions based upon features that are currently available. 

Editor’s Note: This excerpt from Salesforce Futures Magazine explores the diverse applications of personal AI agents. It also reveals why digital companions aren’t just technological marvels helping with basic tasks — they are becoming essential tools in addressing some of humanity’s most pressing challenges.

The transformative promise of personal AI agents — artificial intelligence systems designed to assist people with everyday tasks like scheduling a doctor’s appointment, writing an email, or recommending a book — lies in their ability to compensate for human limitations. Anyone who’s struggled to finish one task during a busy day, let alone juggle many of them, can understand why this would be useful.

Infinite interns + patience 

Imagine an army of infinitely patient interns who stand ready to work on your behalf. These interns can either collaborate alongside you in the flow of work, or they can labor independently, periodically checking in to ensure they’re on the right track. The more they work with you, the smarter they get, learning how you work and think across modes and contexts. Think about a world where all of us have the kind of expert staff currently enjoyed by CEOs: gifted helpers who learn our preferences, understand our goals, engineer outcomes, and specialize in doing all of the things we don’t want to do.

Tech blogger and consultant Venkatesh Rao encourages us to think about machine intelligence as fundamentally different from human intelligence, particularly when it comes to “attention.” Our personal agents will have endless patience for tedious, detailed tasks (think taxes, paperwork, applications, and more) that sap our attention and, occasionally, our desire to endure the human condition. Agents have the potential to remove this burden.

Personalization

Greater contextual intelligence and more persistent memory suggest agents will provide personalization that’s far greater than what we see today. Itai Asseo and Phil Mui, who work on AI research and development for Salesforce, encourage us to think about personalization in three categories: “know me,” “inform me,” and “empower me.”


Meet Einstein Copilot, a new customizable, conversational, and generative AI assistant for CRM.

Einstein Copilot can answer questions, summarize content, create new content, interpret complex conversations, and dynamically automate tasks on behalf of a user, all from a single, consistent user experience embedded directly within Salesforce’s #1 AI CRM applications.


In the “know me” category, agents keep your goals in mind, analyze your performance, and adjust to your unique style. Because every interaction with personal AI will be remembered‌ — or stored as a state‌ — ‌and factored into future use cases, a flywheel effect takes hold: the more you use your agent, the better it gets at anticipating your needs in an intuitive way.

In the “inform me” category, we consider the possibility that agents could use their contextual intelligence to help guide and prioritize our attention and separate signals from noise. This has clear applications in the personal productivity space, but the implications on the consumer side are no less significant. Imagine an agent who helps you switch on “Zen Mode” and other filters so you can better tune your environment, and even remove things from your plate by acting on your behalf.

Finally, there’s the “empower me” function, which speaks to personal AI’s ability to serve as a coach, mentor, or even a manager.

If personal AI can deliver such control, it will forever change the relationship between customers and companies by raising the bar for direct relational engagement.

Natural interactions

Recent demos by startups Humane and Rabbit have started to tangibly articulate what agent-based offerings might look like.

Common early use cases include ordering food, finding gifts for loved ones, planning trips and events, and scheduling appointments. What these demos hint at is a more fluid and flexible collaboration between humans and machines, especially when it comes to attention. Both offerings promise less tapping and scrolling and more focus.

Both products also rely on conversational interfaces to do this, but we think voice will be only one of the ways people interact with their agents. Many proto-agents feature travel in their demos, but solving a multi-part travel puzzle that includes schedules, price comparisons, and airline and hotel preferences using only voice commands is less than ideal. Liz Trudeau of Salesforce Design encourages us to think about a more practical alternative. “Think about a collaborative interaction, a flexible interface that adapts to the task at hand and the job the user is trying to accomplish,” she said. 

Think about a collaborative interaction, a flexible interface that adapts to the task at hand and the job the user is trying to accomplish.

Liz Trudeau, Salesforce Design

We like this concept, “UI on the fly,” because it emphasizes a truly responsive interface that adjusts accordingly as needs change. You can get a glimpse of what this multi-modal future might feel like in the much-discussed Google Gemini demo depicting the planning of a birthday party.

In aggregate, these developments point to futures where tools are easier to use and it’s easier than ever to get things done.

Advanced skills and learning

In 2014, when Amazon launched its original Echo, the device was little more than a Bluetooth speaker. Adding the Alexa assistant promised to turn the Echo into something else entirely: a conversational smart home hub, particularly as the Alexa Voice Service SDK toolset expanded the library of skills available to consumers. Alas, most Echos are still used mainly as speakers and the skills revolution has not happened. Nevertheless, the concept of a core agent and a library of additional skills offers a preview of what we might see in an agentive world, hopefully with results far more impressive than Alexa 1.0.

The ability to train agents on data sets (including proprietary ones) means people can far more easily create useful personal AI tools for others based on specialized knowledge. This, in turn, empowers people to build more agents. Already, we can see OpenAI allowing users to rapidly build their own GPTs, enhanced with advanced skills based on additional “instructions, extra knowledge, and any combination of skills.”

These advancements suggest a forthcoming Cambrian explosion of agent skills and capabilities, personalized for an infinite variety of tasks. We know this would transform how companies interact with customers. What we don’t know is which sectors will be transformed first. 

Unmet needs

A key element of Clayton Christensen’s theory of disruptive innovations is that early disruption will come from offerings that target the unmet needs of customers at the bottom of the market, hitherto uneconomical to serve. Often, these offerings appear inadequate to the mainstream of the already served, but they can grow and improve from tiny seeds. Looking at the emerging agent landscape, we can see an example of this dynamic in the wave of AI companions, such as those provided by Replika, character.ai, and Baidu’s Wantalk.

These companions, chatbots, and avatars serve the needs of the lonely, the elderly, and those who cannot afford costly therapy. While companions can generate an “ick factor” response among mainstream audiences, observers such as Andreessen Horowitz have identified how the “companion stack” may hold clues as to the future directions of the larger agent space.

For example, we may see agents that super-empower individuals, catalyzing a new wave of entrepreneurship in developing countries by serving needs that could never have been met before, and thereby generating a similar impact to that of mobile phones a generation ago.

Conclusion

The evolution of agents will be determined by consumer preferences and progress against technical challenges, but market dynamics and the way businesses and consumers balance trade-offs will play an equal role. In other sections of Futures, the new magazine from Salesforce Futures, we look at how agents work, whether or not agents represent a viable new business category, and some possible and plausible AI futures and their implications. 

For a deeper dive into the world of personal AI, check out the magazine here

Einstein Copilot for Tableau empowers users with self-service analytics, streamlines analyst workflows, and unlocks strategic data insights for users across the entire organization

Today, Salesforce announced the beta availability of Einstein Copilot for Tableau, a new capability designed to help users in every role and function explore data with AI assistance. 

Businesses tend to distribute insights from data in reports and dashboards created by expert analysts. Dashboards created in Tableau are visual and interactive, allowing users to adjust scope by exploring predefined guided paths. But sometimes a user doesn’t find the answer to their question in a dashboard, and needs to explore the data on their own without deep analytical training as a prerequisite. With Einstein Copilot for Tableau, users can dive deep into their data, utilizing Tableau’s powerful analytical engine through natural language to query and derive rich insights from data sources like spreadsheets, cloud and on-premises data warehouses, and Salesforce Data Cloud.

Einstein Copilot for Tableau enables customers to increase productivity and uncover deeper data insights through a guided, natural language-driven analytics experience.

Organizations in every industry are searching for efficiencies and better decision-making, and are asking their teams to use new tools that leverage AI.

Einstein Copilot for Tableau makes data analysis accessible to every business user, and even suggests questions to users based on analyzing the business data and metadata, helping reduce the number of change requests and updates needed from data analysts, often resulting in faster data-driven decision-making.   

Einstein Copilot for Tableau also leverages the Einstein Trust Layer, giving business and data teams robust tools to help protect data and limit exposure to third-party models. Unlike other AI orchestration engines, the Einstein Trust Layer does not retain customer prompts or the LLM’s responses. This helps customer and proprietary data remain private.

Following Salesforce’s recent introduction of Einstein Copilot, Tableau’s new AI assistant is purpose-built for analytical use cases. Einstein Copilot for Tableau features include:

With Einstein Copilot for Tableau, novice analysts and data-curious users can create analytical views and dashboards without learning complex calculation syntax.

Salesforce perspective: “Every employee, in every function, must develop fundamental data skills to be successful in the modern enterprise,” said Ryan Aytay, CEO, Tableau. “Einstein Copilot for Tableau streamlines that skill development, helping anyone become experts at understanding data, and enables everyone in the business to surface insights more quickly with trusted AI. Now everyone’s a data expert!” 

Einstein Copilot for Tableau streamlines that skill development, helping anyone become experts at understanding data, and enables everyone in the business to surface insights more quickly with trusted AI. Now everyone’s a data expert!

Ryan Aytay, CEO, Tableau

Analyst perspective: “Generative AI has the potential to truly revolutionize how insights are accessed and interacted with across the business, but organizations want assurances that the data can be trusted,” said Doug Henschen, vice president and principal analyst at Constellation Research. “With trust guardrails in place, gen AI can evolve from an ambition to a super-charged tool for business.”

As Einstein Copilot capabilities evolve, customers of Salesforce and Tableau will be able to ask natural language questions of data, visualize insights, and turn those insights into action that creates better experiences for customers. More information will be shared at Tableau Conference, April 29-May 1, in San Diego.

Availability: Einstein Copilot for Tableau is currently available in beta to a limited set of customers, and will be generally available this summer.

Learn more:


Any unreleased services or features referenced in this or other press releases or public statements are not currently available and may not be delivered on time or at all. Customers who purchase Salesforce applications should make their purchase decisions based upon features that are currently available. Salesforce has headquarters in San Francisco, with offices in Europe and Asia, and trades on the New York Stock Exchange under the ticker symbol “CRM.” For more information please visit https://www.salesforce.com, or call 1-800-NO-SOFTWARE.

Today, Salesforce announced it has been named a Leader in the IDC MarketScape Report: Worldwide Enterprise B2B Digital Commerce Applications 2023-2024 Vendor Assessment¹. Salesforce was evaluated for Commerce Cloud, which recently added new features including generative AI and data innovations for enterprise B2B commerce customers.

Why it matters: B2B ecommerce grew 17% in 2023, and 7 out of 10 B2B buyers see online buying as more convenient.

The report notes:

Consider Salesforce Commerce Cloud if your organization is primarily focused on differentiating via business agility, AI, and data, and deep relationships from a platform that is very business-user friendly.

The Salesforce perspective: “B2B businesses grow with Commerce Cloud because it delivers on the trust, agility, and innovation customers need to win in the AI era,” said Michael Affronti, SVP and General Manager of Commerce Cloud. “With trusted AI capabilities embedded across every buyer touchpoint, integrated data from Salesforce Data Cloud that powers AI, automation, and insights, and one of the most robust partner ecosystems in the world, Commerce Cloud is helping B2B companies drive stronger customer relationships and profitable growth every day.”

Innovation in action: Salesforce continues to invest in B2B commerce innovations to help global companies drive profitable growth and efficiency while meeting customer expectations. Some of the latest enterprise innovations in Commerce Cloud include:

Additional information:

¹IDC MarketScape: Worldwide Enterprise B2B Digital Commerce Applications 2023–2024 Vendor Assessment (Doc #US49742523, December 2023)

Today, the demand for skilled professionals in artificial intelligence is higher than ever. According to new Slack research, AI use in the workplace accelerated 47% in the past quarter. As of January 2024, more than 1 in 4 (28%) UK desk workers reported having tried AI tools for work, compared with 1 in 5 by September 2023.

With generative AI creating new job opportunities as it transforms industries, this demand is only set to increase. According to IDC, the Salesforce ecosystem — fuelled by AI — could generate over $41B in economic benefits and create over 500K jobs in the UK by 2028.*

Globally, IDC reports that 50% of companies have already hired data engineers over the last 12 months, 43% have hired business analysts, and 41% have hired AI solution architects. When asked which AI roles businesses plan to hire for in the next 12 months, top-ranked roles include data architects (50%), AI ethicists (43%), AI solutions architects (41%), and machine learning engineers (39%).*

To make the most of the opportunities that AI presents, the UK needs innovative solutions to address its digital skills crisis. Crucially, these must be inclusive to unlock the talents of women and diverse groups.

Bridging the AI skills gap

As companies roll out their AI strategies, it’s crucial that upskilling and widening accessibility are top of mind. UNESCO has warned that the failure to close the gender gap is self-perpetuating, and risks leaving us with an economic and technological system in which women are massively underrepresented.

At Salesforce, equality is a core value, and we have developed partnerships with governments, public sector organisations, and nonprofits to provide upskilling opportunities to help address the AI skills gap.

These learning opportunities are delivered via Trailhead, Salesforce’s free online learning platform, and through various workforce development programs, expert-led training events, self-paced e-learning courses, and certifications for jobs in the Salesforce ecosystem. Trailhead can guide individuals with limited technical knowledge into Salesforce roles within six months and has expanded its content to include AI-specific skills training.

Supermums leading the charge

An organisation that showcases how women in particular can be empowered to thrive in new and emerging technology jobs is Supermums.

Supermums is a social enterprise which aims to democratise opportunities for women to enter the world of technology and Salesforce.

When its future founder Heather Black became an accidental Salesforce administrator for her nonprofit, she enjoyed it so much that she decided to upskill as a Salesforce Consultant, helping like-minded organisations implement a CRM.

As a mother of two, Black began to think about how Salesforce enabled her to stay working, and how her career path could work for other parents. In 2016 she launched Supermums to bring a diverse range of women into tech.

For Black, Supermums is more than a training programme. It’s a movement empowering women to realise their full potential in tech.

“As demand for AI talent grows, our mission becomes even more vital,” said Black.

“With Supermums, we’re not just bridging the AI skills gap; we’re shaping a future where everyone thrives.”

Heather Black, Founder and CEO, Supermums

By providing accredited courses and ongoing support, Supermums enables women from diverse backgrounds to pursue flexible, well-paid career opportunities in technology. It also serves as a bridge for women seeking to re-enter the workforce by offering tailored training programs and support networks. 

Supermums has now trained over 1,000 individuals and over 200 companies have used it to hire talent. Last year it launched its first AI course.

Supermums at Salesforce Tower London

Using AI for good

Once in employment, the programme graduates have thrived across different sectors. Ana-Maria Guzu, a Salesforce Administrator at NHS – Birmingham Children’s and Women’s Hospital Charity Trust, explained that “AI initially seemed like a mystery to me” but discovering Salesforce “was like finding a new world of endless possibilities. I dove into the Salesforce ecosystem, completed the admin course, and instantly saw its potential to make a positive impact. Wanting to use technology for good, I found my way to the NHS.”

“With each lesson, I’m not just learning about AI; I’m discovering how it will shape my journey of using tech for the greater good.”

Ana-Maria Guzu, Supermums alumna, now a Salesforce Administrator at NHS – Birmingham Children’s and Women’s Hospital Charity Trust

Margaret Vining, a Supermums alumna who is now a Salesforce Administrator at Agility Technologies Inc, is excited about how AI is changing the world of work.

“The impact of AI is like when people started working on PCs in the 1980s. It enabled a fundamentally different way of working. Some people resisted but now it’s everywhere. Today we need to get comfortable with this new wave of AI. It’s a tool, which makes people better at their jobs. In my role, tasks that would have taken weeks can now be done in minutes.”

 “AI truly enables limitless possibilities, and we are just starting to see the value it can bring, both in our personal and business lives.”

Margaret Vining, Supermums alumna and now Salesforce Administrator at Agility Technologies Inc.

AI can only be fully leveraged when people have the skills to use the technology effectively.

Salesforce is committed to equitably equipping people with the tools to take on jobs that our digitally transforming economy demands, working hand in glove with forward-thinking organisations like Supermums, alongside our industry peers. When we work together with inclusion at the heart, everybody wins.

Learn more:

*IDC Infographic, sponsored by Salesforce, Salesforce Economic Impact, doc #US51404923, December 2023

Salesforce has announced new features for Service Cloud that provide agents and supervisors with AI-powered insights, content generation, and automation capabilities to help increase customer satisfaction, grow loyalty, and transform contact centers into revenue generators. Powered by Salesforce’s Einstein 1 Platform, the new features let service leaders use Data Cloud and Einstein to identify recurring issues, recommend next steps based on customer feedback, and monitor service conversations to suggest ways agents can resolve cases faster.

Why it matters: Good customer service can drive billions in new revenue*, yet people often have disconnected experiences when dealing with customer service agents. Sixty percent of people surveyed in Salesforce’s State of Service report say it feels like they’re communicating with separate departments during a service call, and 66% often have to repeat or re-explain information to different representatives. 

What’s Coming: New AI and data innovations in Service Cloud include: 

The Salesforce perspective: “Customers are right to expect smarter, faster experiences in this AI era. Salesforce’s new innovations empower contact centers with real-time data and trusted AI to resolve cases and provide proactive, personalized service when and how customers want it — and sometimes even before they ask for it.” – Ryan Nichols, Chief Product Officer, Service Cloud 

Salesforce’s new innovations empower contact centers with real-time data and trusted AI to resolve cases and provide proactive, personalized service when and how customers want it — and sometimes even before they ask for it.

Ryan Nichols, Chief Product Officer, Service Cloud

Reaction to the news: “At Sonos, we intend to use the new innovations at Salesforce to improve the agent and customer experience in the future by utilizing AI to handle mundane tasks, allowing our agents to focus on building customer loyalty and driving revenue.” – Dharam Rai, Vice President, Customer Experience, Sonos

Partner ecosystem extends the power of the intelligent contact center: Salesforce has an extensive partner ecosystem that provides capabilities for service organizations of all sizes and in any industry. Amazon Web Services (AWS) will bring Amazon Connect Chat, Amazon Connect forecasting, capacity planning, and agent scheduling to Service Cloud. This, together with generative AI, will power unified customer experiences, more productive agents, and more informed supervisors. Genesys will help orchestrate personalized conversations across all channels while providing service leaders with workforce performance insights to inform organizational decisions.

Availability:

Learn more:

Any unreleased services or features referenced in this or other press releases or public statements are not currently available and may not be delivered on time or at all. Customers who purchase Salesforce applications should make their purchase decisions based upon features that are currently available.