How to choose your first company Copilot, how to think about it, and what to do with it

Dawid Naude, Director, Pathfindr

If your company is using a collaboration stack like Microsoft Teams or Slack, you’ve likely looked into a chatbot in the past. If you launched one, it probably had limited success, or a lot of success in limited areas. Now, with the big news that Copilot for Microsoft 365 is available to any company following the removal of the 300-seat minimum, you’re wondering whether you should build a new one, add to the existing one, or revisit how you use them entirely.

The dream is that everybody has their own ‘Copilot’, a tireless administrative assistant who never asks for a raise nor takes a sick day. They can help you find, understand, create and act. They’re the world’s greatest assistant, and your most patient advisor.

The term ‘Copilot’ is applied across many Microsoft bot-like products that behave in very different ways and cost very different amounts. So before we start, let’s define the building blocks.

Bots, like people, have two types of usefulness - knowledge and skills. Things they know, and work they can do.

Types of knowledge

  1. Curated Content: Deliberately chosen, ‘trusted’ and official content. HR policies, brand guideline packs, FAQs, troubleshooting articles.
  2. Community Content: Created by anyone, accessible by everyone. A folder that holds all the sales presentations, a Confluence site with all the platform documentation, case studies and reusable material from multiple projects.
  3. Connected Content: Data from other systems, or special data from a specific platform, like the Microsoft stack. Transcripts of your MS Teams meetings, access to all Cases in Salesforce, your emails, SAP records, and the underlying codebase of software you’ve developed.
  4. Common Content: All the data the large language model was trained on, such as out-of-copyright books, billions of news articles, Wikipedia, developer documentation, blog posts, essays and websites. This is the content that initially (and still) left society in awe of ChatGPT answering requests like “explain quantum mechanics to me like I’m an 8-year-old”.
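To make the taxonomy concrete, here is a minimal sketch of how these content types might sit behind a grounded bot. Every name in it is illustrative, not taken from any real Copilot SDK. The shape is the point: three of the four types need a retrieval step, while Common Content lives in the model itself.

```typescript
// Hypothetical sketch: modelling the four knowledge types behind a grounded bot.
// None of these names come from a real Copilot SDK.

type KnowledgeType = "curated" | "community" | "connected" | "common";

interface KnowledgeSource {
  type: KnowledgeType;
  name: string;
  // Returns passages used to ground the model's answer to a question.
  retrieve(query: string): Promise<string[]>;
}

// Stub retrievers standing in for a document search index and a CRM API call.
const hrPolicies: KnowledgeSource = {
  type: "curated",
  name: "HR policies",
  retrieve: async (q) => [`[policy excerpt matching "${q}"]`],
};

const salesforceCases: KnowledgeSource = {
  type: "connected",
  name: "Salesforce cases",
  retrieve: async (q) => [`[case record matching "${q}"]`],
};

// Common Content is the exception: it lives in the model's weights, so there
// is nothing to retrieve. Everything else is fetched per question and
// stitched into the prompt.
async function buildGroundedPrompt(query: string, sources: KnowledgeSource[]) {
  const passages = (await Promise.all(sources.map((s) => s.retrieve(query)))).flat();
  return `Answer "${query}" using only these sources:\n${passages.join("\n")}`;
}
```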

Types of skills

General Skills: Mostly text-based. Models can perform tasks by understanding and transforming text, like summarising, generating and analysing things. Because computer code is text, they can also understand and create code, and by extension CSVs and other formats. For ChatGPT, searching the web on your behalf is also a general skill.

Specific Skills: Deliberate capabilities built for your organisation, like creating a sales pitch in your approved, company-branded MS PowerPoint style (a combination of workflow, permissions, files, APIs and images); booking meetings for you; crawling the web to create a market intelligence briefing combined with your internal insight; or creating architecture diagrams from an underlying code base.
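How does a model acquire a specific skill? The common pattern today is function calling: you describe the skill as a schema and the model decides when to invoke it. The sketch below uses the OpenAI Node SDK purely as a stand-in (Microsoft doesn’t publish Copilot’s internal plumbing), and book_meeting is a hypothetical function your own backend would implement.

```typescript
// Sketch: exposing a "specific skill" (booking a meeting) via function
// calling. The OpenAI SDK is a stand-in here; the pattern - describe the
// skill as a schema, let the model decide when to call it - is the point.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // any tool-capable model works for this sketch
  messages: [{ role: "user", content: "Book 30 minutes with Nate on Thursday" }],
  tools: [
    {
      type: "function",
      function: {
        name: "book_meeting", // hypothetical skill your backend implements
        description: "Book a calendar meeting with a colleague",
        parameters: {
          type: "object",
          properties: {
            attendee: { type: "string" },
            durationMinutes: { type: "number" },
            day: { type: "string" },
          },
          required: ["attendee", "durationMinutes", "day"],
        },
      },
    },
  ],
});

// If the model chose to use the skill, structured arguments arrive here,
// ready to pass to your real calendar API.
const call = completion.choices[0].message.tool_calls?.[0];
if (call) console.log(call.function.name, call.function.arguments);
```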

Combining Connected Content with Specific Skills

In March last year, Microsoft shared videos showing PowerPoint decks created from a simple prompt, emails summarised, an attachment a client sent found by simply asking “Can you find the xls that Nate shared last week”, and a summary of actions produced after a recorded meeting. Most impressively, they showed different sources being combined to create something new: “can you create a PowerPoint from the outline we discussed in the meeting yesterday”.

All of that is an example of Connected Content and Common Content. The Connected Content here is mostly from the Microsoft Graph API, a rich dataset that collects what you're doing, who you’re meeting, and the relationships between everything on the platform.

So a question like “find the file Nate shared with me last week” will look at the Microsoft Graph, see that I’ve been interacting with Nate Buchanan, look through MS Teams chats I’ve had with him, emails and meetings, and surface the file. The same can be done with Shopify sales data, Salesforce Cases, SAP Orders and so much more.
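To give a feel for what sits underneath, here is a sketch of the kind of Graph query that request maps onto. It uses the official @microsoft/microsoft-graph-client SDK and a real endpoint (/me/drive/sharedWithMe), but the token plumbing is elided and the filtering is heavily simplified; Copilot’s actual orchestration across chats, mail and meetings is far richer.

```typescript
// Sketch: the flavour of Microsoft Graph lookup behind
// "find the file Nate shared with me last week".
import { Client } from "@microsoft/microsoft-graph-client";

const client = Client.init({
  // Assumes you already hold an OAuth access token for the Graph API.
  authProvider: (done) => done(null, process.env.GRAPH_TOKEN ?? ""),
});

// Files other people have shared with the signed-in user.
const shared = await client.api("/me/drive/sharedWithMe").get();

const oneWeekAgo = Date.now() - 7 * 24 * 60 * 60 * 1000;
const fromNate = shared.value.filter((item: any) => {
  const info = item.remoteItem?.shared;
  return (
    info?.sharedBy?.user?.displayName?.includes("Nate") &&
    new Date(info.sharedDateTime).getTime() > oneWeekAgo
  );
});

console.log(fromNate.map((f: any) => f.name));
```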

So typically, when a chatbot ships with a platform, it ties in the Connected Content that is part of that platform’s secret sauce, along with Specific Skills for that platform.

The Microsoft stack is particularly confusing, so I’ve created my take on it below. Of course, make sure you visit the official site for validation.

How to think about the Microsoft Copilot category


Many Copilots and one master Personal Copilot

The current trend is that every platform will have a Copilot-type experience, which makes sense. We use lots of systems, some of them very infrequently but out of necessity, and not needing to remember exactly how to navigate Workday or SuccessFactors for the few times a year you log a leave request is helpful, especially for new employees. Using an in-built Copilot for those tools still means going to the tool, but once you’re in it you can simply ask the chat, “can I log leave?”.

What’s even more helpful, though, is to start investing in and deliberately building your ‘Personal Copilot’: the one Copilot that is by your side no matter what task you’re doing, in the place you’re most familiar with. By adding enough Curated Content, Specific Skills and Connected Content (access to other systems) to this Personal Copilot, you could eventually remove the need for staff to access those infrequent systems at all.

If you had a human assistant right now, where would you give them instructions throughout the day? It’s most likely pinging them in Microsoft Teams or Slack, so that’s a great place to put your Personal Copilot.
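As a concrete starting point, the smallest possible Personal Copilot is a bot in that same chat tool that forwards questions to a model. The sketch below uses the official @slack/bolt SDK; answerWithLLM is a hypothetical stub standing in for whatever model and grounding pipeline you choose.

```typescript
// Sketch: a minimal "Personal Copilot" living where people already work,
// here as a Slack bot built on the official Bolt SDK.
import { App } from "@slack/bolt";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Stub: call your model of choice here, grounded in your curated,
// community and connected content.
async function answerWithLLM(question: string): Promise<string> {
  return `You asked: "${question}" (model answer goes here)`;
}

// Any message the bot can see becomes a question for the copilot.
app.message(async ({ message, say }) => {
  if ("text" in message && message.text) {
    await say(await answerWithLLM(message.text));
  }
});

await app.start(3000);
```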

And just like a human, at the start it will have some general knowledge and baseline skills, and over time it will improve and become more capable. But unlike a human, you will have to ‘teach’ it new skills very deliberately; it won’t notice the report you create manually every day and go off and learn to automate it. Essentially, it won’t learn from you the way a person would. That’s the next frontier, and it poses questions I’ll leave to the philosophers.

It’s also a great way for leaders to get a pulse on their organisation

It’s traditionally been very difficult to really understand what your people do, what they need, and how you can help them. It would require workshops, interviews, time and motion studies, and expensive consultants. But now we have the opportunity to just see the actual questions that staff are asking - “can you write a grant application for me?”, “can you help me troubleshoot updating a customer order?”, “what’s our stress leave policy?”. We can see themes, spikes, common problems and then solve them. The unknown would become known.

If you’re concerned about privacy and Big Brother, there are approaches to building trust into the system while still acknowledging that, ultimately, a user has asked a question and there will be a fingerprint of that individual. It’s similar to the price we pay now: we can have the capability, but not without forgoing some privacy. You can set it up so that employee questions aren’t matched to the employee themselves, though if a company really wanted that information, it could get it.
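One common way to strike that balance is pseudonymisation: log every question under a keyed hash of the user ID, so analysts see themes and per-person patterns without names, while only the key holder retains the theoretical ability to re-identify. A minimal sketch, with all names illustrative:

```typescript
// Sketch: logging questions for theme analysis while pseudonymising the
// asker. An HMAC with a secret key gives a stable per-user fingerprint
// that analysts can't reverse, though whoever holds the key could.
import { createHmac } from "node:crypto";

const PEPPER = process.env.LOG_PEPPER ?? "rotate-me-regularly";

function pseudonym(userId: string): string {
  return createHmac("sha256", PEPPER).update(userId).digest("hex").slice(0, 12);
}

interface QuestionLog {
  who: string; // the pseudonym, never the raw user ID
  question: string;
  askedAt: string;
}

function logQuestion(userId: string, question: string): QuestionLog {
  return { who: pseudonym(userId), question, askedAt: new Date().toISOString() };
}

// e.g. { who: "9c2f4a...", question: "...", askedAt: "..." }
console.log(logQuestion("employee-123", "what's our stress leave policy?"));
```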

Roll-out needs to be deliberate or adoption will be lumpy; improvement needs to be continual or it’ll stop being valuable.

To wrap up, let’s ground all of this in some reality.

It sounds like every employee’s dream tool: you don’t need to learn how to use it, you just tell it what you want and it does it. Seems like a no-brainer. Unfortunately, we’ve found that adoption without a deliberate rollout strategy still follows a typical bell curve, meaning most people will use it a little, some will use it a lot, and a group will never use it at all. Your business also changes all the time; even if your core service stays the same, the systems, tools, processes and people all change, and you’ll need to change your Copilot continually too.

Just as a person who isn’t improving is atrophying, the same is true for AI support, especially Copilots. A capability that stays the same while the underlying context keeps changing will eventually become useless.

Also, we know that first impressions count. If your users’ first experience is asking the tool to do something it’s not particularly good at, it’ll try anyway and may deliver a shockingly bad output instead of saying the request is beyond its capability. That experience will ruin adoption if users aren’t shown how to get the best out of the tools. Anyone who was on the early trials of Copilot in PowerPoint can relate: ask it to do simple things and watch bizarre results appear.

What we do now will be looked at as ‘data entry’ within a few years.

I must admit that even I, an AI founder, still naturally gravitate towards a Google search instead of asking ChatGPT. That won’t be the behaviour of our young kids and those at university right now; for them it’ll just be ‘the way’. In some ways we need to teach ourselves the working style of the next generation. I imagine the shifts from abacus to calculator, and from paper spreadsheets to spreadsheet files, went through the same adjustment.

In 2025 our graduate cohort will look at how we use systems today with a grimace and consider it ‘data entry’, what most white-collar workers consider the lowest rung of work. I don’t imagine they’ll stay for long; they’ll move to companies where AI does the work AI should be doing, rather than asking a highly educated graduate to do it.

A final point: this technology has never been easier to pick up and get going with. You could spend the next 3 months on an AI Strategy (written by people in air-conditioned corporate offices without any interaction with the people who will actually benefit from the AI), or you could spend it running targeted experiments, collecting feedback, improving, and doing it again.

Essentially, your next 3 months should be spent testing AI in your call centre/field service team/factory instead of in a meeting room debating the formatting of a PowerPoint.
