By Alex Winter
Mar 7, 2024
AI Safety for Teams: Essential Practices for a Secure Workplace [Endless Customers Podcast S.1 E.14]
__
**Note: This transcript was generated by AI and has not been edited for content.**
Bob
0:00:00
People still want to do business with people they trust. Use AI, be more efficient with it, be more effective, be more productive, but do not eliminate your perspective, your souls, and your views.
Alex
0:00:10
Welcome back to Endless Customers. My name is Alex Winter, and today we're joined by Bob Ruffolo, the CEO and founder of IMPACT. Bob, welcome back to the show. Happy to be back, my man.
Alex
0:00:18
Happy to have you. And today we're talking about a really hot topic. We're talking about AI. And for those that listen and watch the show, if you go back to episode seven, we talked about how to create a culture of AI within your organization. And I learned a lot from you on that episode.
Alex
0:00:40
But today, we're talking more about the safety around AI and how to make sure that your company is using best practices to do positive things with AI and not negative things. So where's a good jumping off point for us to start this discussion, Bob? Should we talk about some of the negative things that could potentially happen if you're not thinking
Alex
0:00:57
about the safety around AI and setting up your team for success?
Bob
0:01:01
Yeah, yeah, for sure. And we, I think, are well aware of the risks of AI, right? Could it create vulnerabilities for our company with our data and information we're putting into the systems? Can we be at risk of using content and copy that maybe we don't have the rights to use? What's the effect it's going to have on our staff? Are we going to be making so many different changes?
Bob
0:01:27
But the reality of it, the way I'm looking at it, is that all these risks that exist right now, and this might be controversial, but my perspective of it right now is it's no different than the same risks that have been out there since the beginning of the internet. We're looking at it could be like data breaches, could it be plagiarism, all this stuff. It's all the same, it's been around forever. It's just when social media first rose up, it's like, oh my God, all these new risks
Bob
0:01:56
or cloud computing, all these new risks, right? Now here we are with AI and come on, all the risks are very, very similar. At least that's my perspective of it.
Alex
0:02:05
I've never thought about it that way, but it makes a lot of sense. Whenever there's a new emerging technology, there's always fear around it and how to use it in best practices, but it's still, it's almost like history repeating itself. It really is. You still have the same general confines
Alex
0:02:18
or the same general best practices.
Bob
0:02:20
And don't get me wrong, I'm not saying that those risks are not real. It doesn't mean we don't have to pay attention to them, but again, we've had to pay attention to these for the last 20 years and it's just new technology. It's gonna give us all these new abilities and the risks are still there. They've been there forever.
Alex
0:02:35
Alright, so Bob, you mentioned risks regarding AI. What are some of the downsides that CEOs and leaders should be mindful of and should be thinking
Bob
0:02:41
about or be aware of? Yeah, when it comes to our staff, I mean, are we starting to become too over-reliant on these tools? Even like ChatGPT today, are we becoming too over-reliant and, you know, lazy? We're not checking our work as well as we could. We're actually putting out lower-quality work. That is something I would say we need to start paying attention to right now, especially if our marketing teams and members of our sales team are starting to use ChatGPT in their conversations or in their content. Again, is this really new? I mean, plagiarism has been around for a very
Bob
0:03:10
long time. So it's true, it's easy to just take that copy, not fact-check it, and just put it on our site. But yeah, I think we probably need to be a little bit more cognizant of it these days and pay a little bit more attention to make sure that we're not doing that and that we're not getting lazy.
Alex
0:03:25
What other things are out there? What other potential issues could there be aside from plagiarism that we should also be thinking about?
Bob
0:03:30
Well, I think it goes a little bit more with like false information too. So just checking the stuff that you're putting out there. It's well known that AI can hallucinate. It thinks that it's right when it's really not. It sounds very convincing when it produces an answer back to you and says, this is what the answer is, and it's wrong.
Alex
0:03:48
That's really interesting. It's a lot like when you're used to searching Google, and it's like you can't always take it just for fact. You need to do your research, you need to question.
Bob
0:03:56
Again, same thing. Can you believe everything you hear on TV? No. Can you believe everything you read on the internet? No. Can you believe everything AI puts out? No. But it is your job to make sure that your staff knows that,
Bob
0:04:08
and that they are taking the steps to make sure they're fact checking their work and whatever is being produced. Because there's so many examples that are out there in the media of big brands putting out content that is just factually wrong, because they're over-reliant on AI. Again, same problems have been around the internet forever. But it's a little bit more
Bob
0:04:26
at the forefront right now. So how do you protect yourself? Just make sure your staff is fact-checking their work.
Alex
0:04:31
Yeah, totally, and it's relevant too, because even recently in the news, we've been hearing Elon Musk and certain influencers talking about how OpenAI and Google and certain companies have biases in their algorithms that they need to fix or they need to change, so that plays even more into what you're saying.
Bob
0:04:46
But again, does television have bias?
Alex
0:04:48
Without a doubt. Yeah, what news station do you watch, right?
Bob
0:04:51
Are there websites that have bias? Yeah. Again, it's all been around forever. It's just these tools, if we just say, hey, because ChatGPT put it out there, it's got to be true, well, that's not true. You still have to have a human in the loop checking that work and modifying that work, and make sure whatever we're putting out from our brand, from our company, represents
Bob
0:05:12
the views and perspectives of our company.
Alex
0:05:14
So what is some overarching messaging that CEOs and business leaders should be thinking about when it comes to safety and AI?
Bob
0:05:19
Yeah, I mean, most importantly, you have to have some kind of guidelines, at a minimum, inside your organization. You can have policies if you're more strict, and obviously certain industries have way more compliance they have to adhere to. In those cases, you're gonna have much more strict policies. But at a bare minimum, most of the companies we work with
Bob
0:05:38
can get away with just having some very, very simple principles that if you just put these in place right now, everyone in your staff can remember them, they can pay attention to them, and it's gonna do a lot of good for your company because you're gonna be protecting against, you know, like Pareto principle, you're gonna be, you know, 20% is gonna take care of 80%
Bob
0:05:55
of what you actually have to worry about.
Alex
0:05:57
You just talked about principles, can you list off some of those principles
Bob
0:05:59
and what that could look like? Yep, and we actually use the acronym SAFETY to make sure we're using AI. That's right, we do it safely inside of our organization. So the S in SAFETY stands for secure, and that's just a reminder to all of your staff to check out the data privacy policies, the terms of use, whatever, you know, the terms that these tools put out. Just check that first. Make sure that if you're putting any information into any AI system, you have to know that it's either a private environment or that you're not putting
Bob
0:06:30
the company at risk. Again, there is a level of trust that you have to have with your employees. If you don't have that level of trust, you're going to have to have stricter policies or you're going to have to turn certain systems off. But if you really want to create a culture where people are experimenting with AI and pushing the boundaries and
Bob
0:06:45
creating a culture that really fully leverages this, you have to give your staff some trust in these tools, but give them the guideline to just remind them: always look at the terms before they start putting any information into any of these systems.
Alex
0:06:58
Yeah, that makes a lot of sense. Especially for certain industries, you know, if you're in the medical field with HIPAA laws, like not disclosing certain information that may be really sensitive and that shouldn't be out there. But that also plays into like, that's always been that way. So it's just being mindful of these things as you start to experiment and
Bob
0:07:14
learn and grow. So the next one, A, stands for assistive, not autonomous. Meaning this is a tool that's meant to help our staff be more productive, be more creative, do better work. It's almost like providing assistance to all of our employees to help them get their work done better and faster. It's just like if you have assistants, you still have to
Bob
0:07:37
check their work. You can't just take their work and just say, oh, it's done, and just pass it on, because you don't know if that work is right or not. And we just talked about this earlier. Chances are, it's going to be wrong once in a while. So it's still your job to be the human in the loop, checking whatever AI is putting out.
Bob
0:07:53
And again, this guideline is a reminder to your staff, check all the work before you put it out. Gotcha. That makes sense. And the F is for fact checked. That one's pretty obvious. Check the facts and make sure the work is correct.
Alex
0:08:07
Absolutely. Always got to double-check it.
Bob
0:08:09
And the E is for experiment. The reminder we want for our entire staff is we want you experimenting with AI and trying tools, finding ways to do your job better, faster, more productive than you've ever done before. We've talked on this episode, on this show before, will AI take your job? Dharmesh at Inbound last year said,
Bob
0:08:31
yes, AI will take your job and it'll give you a better one. And the only way we're gonna have better jobs is if we are experimenting with AI and trying new things in all of our work. That should be the expectation coming from the company if we truly wanna have a culture of AI. And to those that are gonna resist it, that's where we need to be a little bit more nervous.
Bob
0:08:49
And if we're not experimenting, we're becoming outdated, we're becoming obsolete as employees, and that's not what we want. Totally, and if you create that culture,
Alex
0:08:56
it promotes the possibilities to experiment and the excitement to experiment versus the fear. And I think, like anything, most of the fear comes from fear of the unknown. So the more that you can experiment, the more that you can use these tools, the better you're gonna be able to leverage them to the full effect and help you do your job better, faster, stronger,
Alex
0:09:17
all those things.
Bob
0:09:18
Yeah, that's right. And the T, the T in SAFETY, stands for transparent. So this is a reminder to be honest and open with whoever is involved, whatever stakeholders are involved, about your usage of AI. That could be if you're using AI to create your content or videos, or if you're using deepfake videos or anything like that, like a tool like HeyGen,
Bob
0:09:40
if you're putting out videos and you really want to still retain that trust with your audience, then just admit that this is a video where you used AI. You know, we created an avatar, but it's helping us be more efficient, the quality's still there, and you're just being honest about that.
Bob
0:09:54
Are you using AI in your content? Are you using it anywhere inside the business? Just, you know, being as transparent with your AI usage as possible.
Alex
0:10:02
Totally, and we even do that on the show here. So the transcriptions for this show, for every episode, we use an AI tool. And we disclose that to everybody so that they know that, hey, if there are little mistakes or whatever the case may be, this is through using AI. It's not actually done by humans.
Bob
0:10:14
Yeah.
Alex
0:10:15
Right.
Bob
0:10:15
And the Y stands for your expertise. So your expertise still matters no matter what. And it's a very definitive principle of They Ask, You Answer, and everything we teach here at IMPACT, that for content marketing to really do well, you have to have a perspective. You have to have a view. You have to have soul in your content. You need to be representative of the company.
Bob
0:10:37
And that's the way people are actually going to care about you, and believe in you, and want to follow you, and want to do business with you. That's how you earn their trust. So if you are thinking you're going to use AI to just do all this for you, you're just going to be putting out content that looks just like everyone else. And it's going to sound like everyone else. It's all going to sound like AI-generated content. It's going to look like AI-generated videos. That's not how you're going to win in this economy. Right. That's not how you're going to build trust. People still want
Bob
0:11:08
to do business with people they trust. That's it. So use AI. Be more efficient with it, be more effective, be more productive, but do not eliminate your perspective, your souls, and your views.
Alex
0:11:19
All right, so we covered the acronym for safety. We understand high level what that is. Now, how do you take that and implement it in your company if you're a CEO or if you're a business leader?
Bob
0:11:28
I mean, it's just like anything else you're rolling out in the company, right? Could it be core values, could it be a purpose, could it be our annual priorities and our goals and our targets. Just like anything else, you just have to roll it out. You have to put focus to it. You have to make it a priority.
Bob
0:11:48
And most importantly, you've got to communicate it and re-communicate it and re-communicate it and re-communicate it. It takes people five, six, seven, 10, 20, 30 times to hear something before they actually remember it. Why do you think companies like McDonald's are doing commercials and billboards as much as they are? Because they're branding, so you constantly remember McDonald's is here and this is what they have to offer. You as a
Bob
0:12:10
leader, you got to take the same perspective McDonald's does with your employees. You just got to remind them, remind them: continue to experiment, check the terms of service, make sure you're checking your work, make sure that, you know, you're doing all these things that are in the SAFETY principles. The more you say it over and over and over and over again, the more it'll actually become part of your culture.
Alex
0:12:29
Absolutely, and especially coming from leadership from the top down, hopefully people are listening. You also did something really cool that I wanna talk about inside of our organization, IMPACT, where you gamified it. And you got people really excited about it, and we talked about this in episode seven when we were talking about the culture, but there's also ways to make it fun for people
Alex
0:12:46
and incentivize them to really participate and almost have like friendly competition to accelerate
Bob
0:12:51
that adoption versus having those resistors? Yeah, I mean, you know, what we do is we put a process in for the way you experiment, but, you know, if you really want to change behavior, do some positive reinforcement, and we decided to do positive reinforcement with a reward. Hey, listen, if you do a meaningful AI experiment here at IMPACT, you're doing something that's gonna make us significantly better in the future. We're talking tens of thousands, hundreds of thousands of dollars, millions of dollars better in the future because you did something.
Bob
0:13:22
And again, it could be AI. It could be any other emerging technology. It could be anything. Finding a better way. So we give spot bonuses. If you do that, and you follow the process, and you turn it in, and you say, these are the results that I've achieved, and we say it's meaningful, yeah, we want to give bonuses.
Bob
0:13:40
We want people experimenting and pushing the boundaries here.
Alex
0:13:43
I love that.
Alex
0:13:44
So Bob, this reminds me of something that you say a lot, and I agree with you: how important middle managers are within your organization, and that they're really the backbone of your organization. So how does that play into what we're talking about?
Bob
0:13:56
Yeah, yeah, I mean if you're really gonna take these policies, again, you can have a CEO at the top of the organization that's shouting this from the rooftop and saying, experiment with AI, check the policies, do all this. But still, it's only one person that's doing that. You know, when you have a culture where you are cascading down these leadership principles and these guidelines, you're really trying to embed them throughout the entire company, inside the culture, and that means you have to have middle managers.
Bob
0:14:21
You know, every frontline employee's direct manager is fully bought in, and they are constantly reminding and bringing this to the forefront and asking in their one-on-ones, have you experimented with AI in the last couple of weeks? What have you learned? What have you done? Have you checked the policies, right? Just constantly reinforcing this with every employee,
Bob
0:14:41
and that's such an important part. So working with your middle managers, make sure you have great middle managers, but also make sure that they're doing their job of taking what leadership has decided this is a priority, and bringing it all the way throughout the organization.
Alex
0:14:53
Like anything with business, not everything is perfect. Not everything goes to plan. Any real business owner knows that. So what happens when a mistake is made? Or what happens if sensitive data is breached, or something gets released that it shouldn't have, or you put out a deep fake or false information, whatever the case may be, what is the emergency plan,
Alex
0:15:11
or the stopgap plan, to fix something like that,
Bob
0:15:13
or at least learn from it and grow from it. You fire the employee, you never talk to them again, you deny it till you're blue in the face. No. No, I mean, listen, it could be AI, it could be anything. Right, it could be an HR issue, it could be something that gets leaked to the media, whatever it is, businesses have been dealing with crises
Bob
0:15:33
since when?
Bob
0:15:34
Since the beginning of time. Beginning of time, right? So again, is this anything new? No, it's not. Having a really good crisis management plan comes first and foremost. If you don't have that in place, I recommend that you put some time in, and re-review your crisis management plan every single year.
Bob
0:15:55
Because new things are going to come up. New technology is going to come up. New leaks, whatever might happen, you need to be re-reviewing your crisis management plan. If you don't have one, we'll go into a couple of recommendations. I am far from a crisis management expert, but I do know a few things that we put in place here at IMPACT and we would recommend to our clients. And again, if you're really worried about this, if you're in an industry that has strict compliance,
Bob
0:16:19
then you might want to talk to somebody more professional. And hopefully this episode is helping you say, well, we need to get our crisis management plan in place. Totally. Yeah, and it's important to have within your processes. You want to plan for the best, or hope for the best rather, but you also have to plan for the worst. And we always joke about the, like, hey, if Alice gets hit by a bus tomorrow, where's the process? Who can come in and pick up the pieces, and is it going to come to a screeching halt because Alice got hit by a bus? So you need to think about those things. Well, I have a guy in my Vistage group who runs the bus company, so he prefers for me to say if I get hit by a train instead of by a bus. Gotcha. In case
Bob
0:16:56
he's watching this episode down here. Yeah, DATTCO has a great reputation, they're good. Nobody's getting run over. So, you know, okay, what's a crisis management plan and how can this apply to AI? First off, if there is an issue, some kind of breach, find it, identify it, and start taking steps to shut it off right away. So whatever's causing that breach, the first thing you need to do is stop it. From there, the next step is to assess the impact
Bob
0:17:21
and spend a few moments to say, okay, what happened, how big is this, and what do we have to address? Number three, you gotta be as honest as you possibly can. If there was anyone that was affected, you have to notify the affected parties. Again, we've all probably been part of some kind of credit card or social security breach, right?
Bob
0:17:37
So we've all gotten notifications like that. You gotta be honest about it. The last thing you wanna do is hide any of that stuff.
Alex
0:17:43
Yeah, and businesses make mistakes, it happens. And I think you're judged on how you handle those mistakes and how you address them openly, forwardly, and honestly. And if you try to hide from them, it doesn't really leave a good taste in people's mouths. So it's important to continue that. And that's very They Ask, You Answer. I mean, that's one of the major principles
Alex
0:17:59
of what we practice here.
Bob
0:18:01
Next, you gotta make sure you're working with your staff. What directions are you giving them on how to handle this? How to handle if there's media that's going to come at them or other questions they may face or what steps they need to do. So make sure there's a plan in place internally on what you want your staff to do in this situation. That makes sense.
Bob
0:18:19
And then the last two steps here is, OK, now we've got to start taking steps to remediate. So whatever that might look like. Listen, could you get, like if there's a social security breach or a credit card breach, are you ever gonna get that information back? No, but there are steps you can take
Bob
0:18:34
to make sure that you're addressing it in the future. So with AI, again, what are we gonna do to make sure this never happens again? Is there damage that we can undo? Maybe, maybe not, but we gotta be as careful as we possibly can, and maybe just put the alerts out wherever we need them, so we're monitoring in case whatever happened
Bob
0:18:54
causes longer-trail issues down the road. And then it's really just a retro on the entire situation. That's the last step. How do we make sure that this never, ever happens again?
Alex
0:19:03
Yeah, absolutely. You gotta learn from your mistakes. So Bob, I know how passionate you are about your business, and I know most CEOs that I meet are the same. It's like their children, it's like their baby, right? So what's an AI concern that keeps you up
Bob
0:19:15
at night, that maybe you're losing sleep over? Yeah, again, my views are probably more along the lines that the risks associated with AI, although some of them might be unique and new, the vast majority of them are the same risks that we've had ever since the internet introduced itself into the business world. So my fears and worries are not so much around the risks that AI and the tools and the usage of AI might present for our business. My fear is much more around us and the businesses
Bob
0:19:51
that we work with not going fast enough with AI, not experimenting enough with AI, not trying it enough. Because if you're not, somebody in your industry will. And if somebody else in your industry does it, does it great and does it much better than you, you're gonna lose market share, you're gonna lose trust, perspective, reputation. You're gonna be the bloated company
Bob
0:20:16
when somebody's the more efficient and more profitable company that has a better product, that does things faster for clients and you're just the slow slug in the industry. That would be my biggest fear, and that's my biggest fear here at IMPACT, is that we're not going fast enough. I know a lot of the other companies in the marketing space
Bob
0:20:35
and they're such geeks when it comes to this stuff. Now, could they also be going so geeky that it's getting them down rabbit holes? Yeah, sure, but there will be companies that do incredible, incredible things with AI and really transform their entire industry. I want to be one of those companies. I want my clients to be those companies.
Bob
0:20:56
I don't want to be the slug of the industry. And I think we could all look back, and it's so common, people say, oh, I had the idea for Facebook. I had the idea for this. It's cool, but you never did it. Mark Zuckerberg did it, not you, right? And I actually remember being in college and while I'm drinking beers and
Bob
0:21:16
having fun and partying, I remember having an AOL Instant Messenger profile and thinking, like, oh, with the bio page, it could be so much cooler if these bio pages were more interactive and you could post longer things and you could post pictures and stuff on there. Like, the idea was in my head, but I never went and built it, I never did anything with it. Somebody else did. And that other person became one of the richest people on the face of the planet.
Bob
0:21:43
AI presents the same exact opportunity right now for our businesses. Can we think about how we can be leveraging this and taking the things that everyone's been doing in our industry, the norms, just like those AOL profile buddy pages, whatever that was, MySpace pages or whatever? Taking the norm that's out there,
Bob
0:22:02
and that's what everyone gets, right? But saying, how can this be better? How can we be doing things better for our customers? How can we be doing things faster for ourselves and for people we serve? And just create such a better product and better experience. AI is presenting that right now for all of our businesses. And my fear is that people that I know,
Bob
0:22:24
people in my organization, people that we serve as our clients, don't go fast enough on that. It's a really interesting perspective. I've never thought about it that way, but it comes back to perspective. You either look at it as an opportunity or not, and the ones that embrace it as an opportunity and find these new ways to leverage these new tools are going to be on the forefront of what's to come and what's happening. That's right.
Alex
0:22:45
Wow, very cool. So for CEOs and business leaders, what's the next step that they can take? They listen to this podcast, Bob. What do you recommend they go do right now?
Bob
0:22:52
You weren't kidding about your Boston accent coming out today, you said that before. Is it bad? No, it sounds great, sounds great. For next steps, next steps. You have to get your entire organization aligned around the guidelines, around the process, around AI being important,
Bob
0:23:07
and now it's part of everyone's job. And by the way, when you experiment with it, there are rewards for doing that. So, and the best way of doing that is by hosting a company workshop. You gotta get everyone in your entire organization fully aligned, that AI is important, it's here to stay, and we are taking this seriously inside of our organization.
Bob
0:23:26
So, run that workshop at least once a year. We also recommend you do it multiple times a year, even once a quarter if possible, if you seriously are making AI a priority inside your company, because there will be new employees coming in and they need to be onboarded to those guidelines. They need to be reminded; they need to hear it three, four, five, ten, twenty, thirty times, right, for them to actually take this seriously and know exactly what to do.
Bob
0:23:52
Whether it be the guidelines, the process, or just the fact that it's important. So run that workshop. If you need help facilitating that workshop, certainly reach out to IMPACT. We do these workshops for marketing teams, for sales teams, for entire organizations. We're happy to customize it for you. But my last piece of advice here would be to get that workshop scheduled and start making this a priority inside your organization right away.
Alex
0:24:16
Couldn't agree with you more. Really great insights today, Bob. Thank you so much for your time. Before we go, for our listeners, for our viewers, how can they get in touch with you if they have follow-up questions, or they want to talk more about AI, or just all things business, or talk They Ask, You Answer with you?
Bob
0:24:27
Yeah, connect with me on LinkedIn or go to impactplus.com. On our website, fill out the form and schedule some time to talk with either myself or a member of our team.
Alex
0:24:36
Bob, always a pleasure. For everybody listening and watching, thank you for checking out Endless Customers, and we'll see you on the next episode.
About this Episode
AI technology comes with amazing capabilities — and some troubling downsides. But, says Bob Ruffolo, CEO and founder of IMPACT, many of these risks have been around long before AI.
For years, every digital technology has come with risks, and we’ve navigated them carefully and responsibly. But think about it, the concerns about AI are concerns we’ve had for decades:
- “It’s too easy to accidentally plagiarize.”
- “We might expose client data to the wrong people.”
- “Overly-manipulated media will make people question our brand.”
- “We need to be on the lookout for biases.”
You read these statements in 2024, and your mind goes immediately to AI. But these same concerns have dogged Photoshop, cloud-based computing, and content marketing for years.
All that said, there are legitimate safety concerns about the use of AI, and we need to take them seriously.
However, says Bob, you want to avoid a stringent top-down policy that restricts experimentation. Instead, it’s important to create a culture that supports safe and responsible innovation.
At IMPACT, we use what we call the SAFETY guidelines to govern our AI use:
- Secure - Know the data security levels of the tools you use and act accordingly.
- Assistive, not Autonomous - Human in the loop at all times. You are ultimately still accountable. Will this induce more trust, yes or no?
- Fact Checked - Review all output for accuracy.
- Experiment - Test AI for improved performance in all areas of your domain: improving work quality, increasing your capacity and output, and reducing costs.
- Transparent - Be transparent in AI usage to maintain trust. Cite sources publicly.
- Your Expertise Matters - AI is a tool to enhance your existing creative insights and strategic thinking. You own your AI skill set and you own how you scale it for your own professional development and for the good of the organization.
But guidelines are only good if the whole team follows them. Bob reminds the audience that it doesn’t matter if the top brass understands risk analysis. You need your front-line workers to be acting responsibly as well.
The best way to get everyone on the same page, he says, is through a full-team workshop. That way, you have a shared vocabulary and crystal clear expectations.
Connect with Bob
Bob Ruffolo founded IMPACT in 2009. In that time, he’s grown the company from a small web design agency to a premier marketing and sales training organization. Today, IMPACT’s experts provide coaching and guidance to teams all over the world.
Learn more about Bob at his IMPACT bio page
Connect with Bob on LinkedIn
Keep Learning
Watch Ep. 7: AI for Businesses — 6 Steps All CEOs Should Take
Read The AI Best Practice Guidelines Your Company Needs
Learn more about IMPACT’s AI Culture workshop
__
Endless Customers is a podcast produced and distributed by IMPACT, a sales and marketing training organization.
We coach businesses to implement our They Ask, You Answer framework to build trust and fill their pipeline.
For inquiries about sponsorship opportunities or to be considered as a guest, email awinter@impactplus.com.
Want to tell us about a challenge you’re facing? Schedule a free coaching session with one of our experts.