Building an Enterprise Strategy for AI
[[file:Building_an_Enterprise_Strategy_for_AI.jpg|500px]]

AI is a Hydra-like beast. It can sink its heads into so many different crevices of an organization that managing it can seem hopeless — even deciding where to start is a challenge. One might resort to the Herculean approach of sword and fire — cutting off every use of AI and cauterizing its growth until the problem looks manageable — but that risks killing the organization’s curiosity and interest at the very moment they should be flowering. Nor is strict central control likely to be productive. When so little is known, trying to impose too much order, or hew too closely to a plan, is a sure road to failure.

We have a strong bias toward a certain kind of strategic thinking that provides clarity about exactly what we want to achieve. In our descriptions of great strategists, they are able to identify what matters most and direct an organization’s efforts toward that goal. That is one kind of strategic thinking, but it isn’t the only kind, and it implies a level of knowledge and certainty that isn’t always (or even often) present. In situations of uncertainty — common enough, though rarely as deep as with AI — a strategy needs to focus on creating the conditions under which a clear set of strategic decisions becomes possible. Leaders are necessarily post-hoc masters of pretending to a certainty they never really had, but good leaders don’t confuse the appearance of certainty with the reality.

A good leader must understand whether they have enough knowledge to have clarity. If not, the most strategic thinking you can do is about what you need to learn. If you’re thinking about AI in your organization, that’s a much better mindset to have. AI is too new, too fragmented, evolving too rapidly, and has too many implications for work and culture for anyone to be confident about how to use it effectively. If you’re building an enterprise AI strategy, figuring out how to learn what you need to know ought to be a priority.

That’s why learning is front-and-center in the AI strategy I’ve been building. Learning where AI is in the organization. Learning its risks. Learning where it can make a positive difference. Learning which tools and vendors work well for us. Learning how to make people knowledgeable. And — most of all — learning how to use AI well for us and our clients.

The first element of that learning strategy is to get a grip on where the organization stands: who’s using AI, how it’s being used, what tools we have, and whether there are any risks in what’s going on. Along with that, I want to figure out where we think the biggest opportunities are. If we’re going to invest in AI, it’s worth being thoughtful about it and targeting the places in the organization where it might make a difference. These two pieces (current state and opportunity) go hand in hand, and the goal is to tackle them both in a single process.

This kind of assessment is the sort of thing we do all the time for clients when we provide strategic consulting — whether that’s around data, analytics, or technology. Yet it would be absurd to hope for too much from this part of the strategy. There are plenty of cases where stakeholders inside an organization (and, more rarely, consultants outside it) know enough to confidently build a full strategy from their current knowledge. With AI, that isn’t the case — certainly not within our organization, and not outside it either.
It’s easy to mistake this building of a current state and opportunities — which involves building a roadmap — for having a strategy. It’s not. Yes, part of the AI strategy we’re pursuing is to create this kind of assessment and roadmap. But an organization can be quite strategic without such a step, and may be quite unstrategic even with it. There are many aspects of a business (certainly ours, which has grown both organically and through a number of substantial acquisitions) that are never given this kind of formalized treatment, and they don’t necessarily suffer for it. This may seem paradoxical, but having a paper plan is not the same thing as being strategic. Many paper plans lack strategic sense, and there is a formidable gap between having a strategy on paper and having a strategy that drives tactics and decisions. Many of the most strategic organizations flourish because their strategy is straightforward, compelling, and widely understood throughout the organization. They don’t need a strategy laid out in PowerPoint. Unfortunately, when it comes to AI in our organization, we don’t have a straightforward, compelling, and widely understood direction. So a more formal and reflective effort seems worthwhile.

The next element of the learning strategy is to cherry-pick a small number of use-cases we think are suitable for AI and push on them. Hands-on experience should shape strategic thinking. From my time in the Big 4, I’ve seen countless paper strategies divorced from the facts on the ground — and that can be particularly problematic in something like AI, where the facts on the ground are very slippery. The message here is simple: to be good at AI you have to use AI, and that means trying things even with a high risk of failure.

The truth is, we’re not at all sure where the best use-cases for AI will turn out to be. That means leaning into a diverse set of initiatives that range from product to internal workflows to consulting support. That diversity is a conscious choice, like building a balanced portfolio. It ensures that we build AI expertise across multiple domains, and it helps us explore which domains are most amenable to AI improvement.

We’ve also tried to create a portfolio of initiatives with very different levels of difficulty. Quick wins matter because learning is hard and takes time. Organizations, like people, need positive feedback if they’re going to stick with something long enough to become good at it. An example of a quick-win, positive-feedback project is doing project turnover with AI-assisted documentation using tools like NotebookLM. It’s a layup: there are no deep technology challenges and very few barriers. It’s also nice because it lives at the intersection of our internal work and our client consulting work. Providing clients with AI-based documentation feels cutting-edge but isn’t hard at all.

Some of our AI efforts are almost entirely directed at learning. For example, we’re evaluating the impact of AI on our offshoring strategy. Like most technology consultants, we do a lot of offshore work. But the massive strides in AI coding cast doubt on the traditional offshore model. As a programmer, I’ve watched the increasing capability of code-generation systems with a mixture of fascination and dread. I’m not yet convinced that vibe coding is a productive strategy for commercial software, but is an onshore AI-enabled programmer now more efficient than an offshore one?
You might think that enabling offshore developers with AI would bring the same benefits as enabling onshore ones and leave the fundamental economics unchanged, but that isn’t so. The biggest challenges with offshore delivery are communications and management, and right now AI does little to ease either.

We’ve also chosen some riskier AI technology projects built around work we’ve struggled with in client delivery. A good example is an effort to incorporate real-time AI-based highlights into our main technology product. This has let us build out and test a real-time pipeline strategy for AI — something we’ve struggled with. Real-time AI is hard, yet in our vertical it’s almost always necessary. Learning how to do it well within our own technology solutions gives us a low-pressure way to build expertise and create delivery paradigms that work, so that we have a decent chance of success when it comes time to build client deliverables. It’s precisely because we know it’s likely to be hard, and maybe not very good, that we’re trying it. Trying different use-cases, trying different types of solution, and tackling problems that range from easy to quite hard: that’s how you learn.

There’s a final element to this learning strategy, and it’s largely implicit. People often describe the current state of AI adoption in the enterprise as the “Wild West” — devoid of proper control and oversight. It’s true. But the implicit negative in that description isn’t necessarily right. Sometimes an organization benefits from turning people loose and seeing where and how pockets of excellence emerge. Giving people room to learn and drive adoption on their own can be far more effective than attempting to control everything. There’s an undeniable case for control — and some control is probably essential. That’s part of what the assessment is meant to cover. But when you aren’t confident about what you know, creating space for that knowledge to flower is an important and underappreciated part of a strategy. Some basic controls are likely necessary, but add too many and you’ll stifle the impulse to explore.

In an enterprise, negative freedom isn’t always enough. You may need to give people the tools necessary for exploration, or the financial flexibility to get those tools. We ended up buying a pool of licenses and creating a simple request process for people to get access to AI tools. This gives us some control over the basic setup but makes it easy for decentralized applications to grow. We’re also not putting any real restrictions on our business units’ ability to go their own way with tools. A lot of our account teams work in client cloud environments that — these days — include very rich AI toolkits, often specific to the platform. Delivering projects in those environments requires a willingness and an ability to use those cloud-provider-specific tools.

It’s easy to equate central control with good strategy and a lack of control with a lack of strategy. That’s just not right. The degree to which an organization centralizes control is a strategic decision in and of itself — one shaped by the tradeoff between the benefits of distributed exploration and the risks of lost control or focus. Given the deep uncertainties around which types of AI initiative will turn out to be productive, we’ve consciously chosen a strategy that involves a certain level of benign neglect.
AI is too important to ignore, but it challenges many traditional organizational practices that assume a certain level of internal or hirable expertise. What do you do when you don’t know enough to create a strategy? You create a strategy for learning. In our case, that means taking the time and making the effort to formally assess where we are and think about AI opportunities. It means getting hands-on with a whole portfolio of AI efforts and making sure those efforts force us to tackle different applications, use-cases, and problem sets. It means making sure the organization gets some easy wins and positive feedback, while also tackling things that we know are hard and may well fail. Finally, it means understanding that effective learning in an organization requires a mix of negative and positive freedom — enough space for people to explore, and enough resources that they can make genuine progress.

Read the full article here: https://ai.gopubby.com/building-an-enterprise-strategy-for-ai-af5713edc8d7