3 Ways Generative AI Will Help Marketers Connect With Customers
3 min read
Recently, I asked my 16-year-old son if he had heard of ChatGPT (“Um, yeah, dad.”) and I wondered if he was using it at school. He said it helps him generate templates or starting points for essays. I then asked him if ChatGPT makes him smarter. No, he said, just faster. So it made me curious whether I could apply generative AI (not just ChatGPT) to my own work. Would it make me faster or more creative, and how?
According to a recent Salesforce study, 40% of desk workers don’t know how to effectively use generative AI at work. And a study by Nielsen Norman Group (NNG) shows that, on average, generative AI tools increase business users’ throughput by 66% when performing realistic tasks. I don’t know about you, but a 66% increase in productivity sounds pretty appealing to me!
Here’s a typical day for me: After making a fresh cup of coffee, I dig myself out of a pile of Slacks and emails. I go from meeting to meeting and often get to do my “real work” only late at night when I have some quiet time. My team members have reported similar experiences. The work can pile up and there’s only so much time in the day. So we decided to experiment with using what we call an AI assistant.
We’re Salesforce’s Ignite team, an innovation consulting group that helps our most strategic customers tackle their biggest challenges. With that in mind, we decided to ask ourselves three questions, one for each of the experiments described below.
For our experiments, we used a range of publicly available AI tools. We measured the time savings and scored each result on a scale of 1 to 10.
On average, it takes our Ignite teams eight to 12 weeks from onboarding to our final readout. There’s no time to waste. So, the first 48 hours are crucial for understanding our customer’s business, their strategic context, and their competition. But what if we could cut that time to just 48 seconds?
Large language models (LLMs) are very good at summarization. Wouldn’t it be great if we could get an LLM to do all the secondary research for us and summarize it so we can get up to speed faster? That’s where it gets a bit tricky. What we found is that there are a few challenges you might run into:
Cutoff dates: Every LLM has a training cutoff date. As of this writing, for example, ChatGPT’s knowledge extends only to September 2021 and Claude 2’s to December 2022. To get around this problem, you can use plugins for ChatGPT that pull in current information.
Hallucinations: Because LLMs work by predicting the next word in a sequence, when you give one a task (such as “Look up the company’s most recent investor relations presentation and analyze the trends”), you’re really asking for something that sounds like a correct response, not necessarily one that is. You have to ground and refine your prompts so that they stick to the facts and don’t make anything up. Even then, you have to double-check the results.
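To make “grounding” concrete, here’s a minimal sketch of the approach: paste the source material directly into the prompt and tell the model to use only that text, which also sidesteps the cutoff-date problem for recent documents. It assumes the OpenAI Python client; the model name, file name, and prompt wording are illustrative rather than the exact setup we used.

```python
# A minimal sketch of a "grounded" summarization prompt: supply the source
# material yourself and tell the model to stick to it.
# Assumes the OpenAI Python client (pip install openai); the file name and
# model are illustrative, not our actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical file containing investor relations material we collected ourselves.
source_text = open("investor_relations_notes.txt").read()

prompt = (
    "Using ONLY the source material below, summarize the three biggest trends. "
    "If something is not stated in the source material, say 'not stated' rather "
    "than guessing. Do not add outside knowledge.\n\n"
    f"SOURCE MATERIAL:\n{source_text}"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # a low temperature tends to reduce embellishment
)
print(response.choices[0].message.content)
```

Even with a grounded prompt like this, we still read the output against the source before using it.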
Summary: We got some really great summaries from our trials, which helped us move more quickly. But we had to spend a lot of time verifying the results and refining our prompts. Overall, our experiment, which we called F48, showed promise, even though the experience was still too clunky to rely on. It wasn’t quite the 48 seconds we were hoping for – at least not yet.
One of the most important aspects of our work is primary research. It helps us understand our customers’ challenges. An AI research assistant that could synthesize interview transcripts, identify top themes, and pull out key quotes would be a big help in that process.
Typically, we conduct about 15 to 20 – or more – interviews per project. Transcription is already automated, thanks to tools such as Otter.ai or Google Hangouts. But synthesizing these transcripts can be tedious work. Synthesizing an entire research stream can take anywhere from several hours to multiple days. There are a few gotchas, however:
Data privacy: When you enter data into ChatGPT, or any of the other LLMs, that data can be used to further train the model and may even surface in later queries. When dealing with sensitive information such as interview transcripts, it’s imperative that the data is handled accordingly. For our interview synthesis, we leveraged our own Salesforce gateway, which has a trusted layer with a zero-retention policy and data masking (a rough sketch of this masking-plus-synthesis workflow follows the next point). ChatGPT allows users to disable chat history and training, noting on their website that doing so will mean that “new conversations won’t be used to train and improve our models.”
Human context: The “key quotes” we got back were sometimes great, and sometimes less so. Because we grounded the prompts properly, the models rarely hallucinated, but they also didn’t have an instinct for punchy, memorable quotes.
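Here’s a rough sketch of the kind of workflow we mean: mask obvious personal details before the text leaves your environment, then ask the model for themes and verbatim quotes. The masking rules, file names, and model are assumptions for illustration; our actual setup runs through the Salesforce gateway mentioned above.

```python
# A rough sketch of interview-transcript synthesis with basic data masking.
# The masking rules, file paths, and model name are illustrative only; real
# PII masking needs far more care than two regexes.
import re
from openai import OpenAI

client = OpenAI()

def mask_pii(text: str) -> str:
    """Crude masking: replace email addresses and capitalized full names."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", "[NAME]", text)
    return text

# Hypothetical transcript files produced by an automated transcription tool.
transcripts = [open(path).read() for path in ["interview_01.txt", "interview_02.txt"]]

prompt = (
    "You are a research assistant. From the interview transcripts below, list the "
    "top five themes. For each theme, include one or two short verbatim quotes, "
    "quoting only text that actually appears in the transcripts.\n\n"
    + "\n\n---\n\n".join(mask_pii(t) for t in transcripts)
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The synthesis comes back in minutes; judging which themes and quotes are actually worth presenting still falls to a human.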
Summary: Overall, we’ve been very happy with our AI research assistant. We were able to get synthesized summaries minutes after we got the transcripts. But it’s no replacement for a human researcher. To get real, useful insights, we still need a human brain. It’s a time saver, but not a replacement.
When presenting a final vision to our customers, we often have our talented designers whip up some amazing visuals to illustrate what that vision might look like. Sometimes we have hand-drawn sketches, stock photography, or creatively edited images. One of the most exciting possibilities of generative AI is the ability to “imagine” visuals using words, and have them come to life.
We were blown away by the range of results, but, as always, there are areas that can trip you up:
Consistency and control: If you’re telling a story with several visuals and/or characters, you need consistency. Stock photography sites sometimes give you a series of photos with the same character that you can reuse, but tools like Midjourney can vary wildly from image to image. There are ways around this, but they require a much deeper immersion into more advanced features available with Stable Diffusion, such as training a LoRA model with your own photos (a minimal sketch follows this list). Similarly, when describing an image (e.g., “a young man holding a tablet next to a semi-trailer truck”), the pose that emerges can vary from render to render, and it can take a lot of trial and error (and patience) to get what you want. Going deeper into features such as ControlNet and Inpainting can help, but they require practice and know-how.
Ethics and bias: Since the models are trained on large amounts of data that we usually know little about, bias can be embedded in them. It’s important to be aware of these biases when working with AI. For example, typing “medical doctor” into many AI image platforms will return results that skew heavily toward white males. As the “AI creative directors,” we need to make sure we recognize these biases. Similarly, we need to make sure that the sources of the images are ethical. Generative AI technology can “copy” an artist’s style without permission, which poses ethical and legal questions.
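To illustrate the consistency tricks mentioned above, here’s a minimal sketch assuming Stable Diffusion via the Hugging Face diffusers library: fix the random seed so re-renders are repeatable, and load a LoRA trained on your own reference photos of a recurring character. The checkpoint ID is a commonly cited public one; the LoRA file is hypothetical.

```python
# A minimal sketch of keeping generated visuals consistent with Stable Diffusion:
# a fixed seed makes renders repeatable, and a LoRA trained on your own photos
# keeps a recurring character on-model. The LoRA file is hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA fine-tuned on photos of the character we want to reuse across scenes.
pipe.load_lora_weights("./loras", weight_name="our_character.safetensors")

# Reusing the same seed means the same prompt produces the same composition each time.
generator = torch.Generator("cuda").manual_seed(42)

image = pipe(
    "a young man holding a tablet next to a semi-trailer truck",
    num_inference_steps=30,
    generator=generator,
).images[0]
image.save("storyboard_frame_01.png")
```

Getting a specific pose still usually means layering on ControlNet or inpainting, which is where the practice and know-how come in.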
Summary: After using generative AI tools to help with the creative process, it’s hard to go back. In some cases, playing around with the tools sparked inspiration for the final product. The only catch is that it doesn’t necessarily make you faster. Then again, as designers, we’ll always fill the time we have to complete a task and try to push the limits of our creativity. These capabilities give us new tools to add to our arsenal. They can even give you new skills. (I’m a terrible sketch artist!)
These experiments made it easy for us to grasp how generative AI can boost our productivity and increase our creativity, even if some of the gains were modest. And for a technology that is still so young, it shows great promise. Almost any team can incorporate AI as an assistant to not only work faster, but also add skills and even increase creativity – or at least help us get inspired.