4 things product leaders shipping AI capabilities need to be aware of

1. Getting your AI application to market is more than just connecting a ‘magical’ LLM. Understanding the new AI technology stack, with an example:

Starting simple, let’s say you wanted to build a chatbot for your company (internal team) that connects to your knowledge base (e.g. Confluence, Notion or Google Drive) and communication tool (e.g. Slack, Microsoft Teams or email).

In this case, your engineering team needs to plan an approach to:

a. Data connectors:
Build data connectors for every tool your customers might use. Given you’re ideally servicing a large addressable market, not every customer will have the same knowledge base or communication tool, so you will need to support a wide range of integrations to attain meaningful coverage of your customers’ potential toolchain choices.
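
For illustration, here is a minimal sketch of one way to abstract connectors behind a common interface. The `Document` fields, class names and the Confluence example are assumptions for the sketch, not a real SDK:

```python
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class Document:
    source: str        # e.g. "confluence", "slack"
    doc_id: str        # identifier within the source system
    text: str          # raw content to be chunked later
    updated_at: str    # used to detect stale content

class DataConnector(Protocol):
    def fetch_documents(self) -> Iterable[Document]:
        """Pull documents (pages, threads, files) from one tool."""

class ConfluenceConnector:
    def __init__(self, base_url: str, api_token: str):
        self.base_url, self.api_token = base_url, api_token

    def fetch_documents(self) -> Iterable[Document]:
        # Illustrative only: real code would page through the Confluence REST API.
        yield Document("confluence", "PAGE-123", "How to request leave...", "2024-05-01")

# Each new tool (Slack, Notion, Google Drive, ...) becomes another class
# implementing the same interface, so the rest of the pipeline stays unchanged.
connectors: list[DataConnector] = [ConfluenceConnector("https://example.atlassian.net", "token")]
```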

b. Chunking:
Determine a chunking strategy that ‘breaks down’ Confluence pages, along with conversations and threads in Slack. This ensures that only the relevant ‘chunk’ of information is passed to an LLM when prompted, providing it with context on your business and ensuring the accuracy of the answer returned. This sounds easy enough at first, but how do you consistently ensure that you’re only pulling the ‘right’ part of a conversation thread, or the correct paragraph and image from a Confluence document? As the knowledge base is updated or changed, you need to keep updating your chunking implementation to retain a high quality bar on information retrieval.
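
As a rough illustration, a paragraph-based chunker with a small overlap might look like the sketch below. The character limits are arbitrary, and a production version would also need to handle Slack threads, images and document updates:

```python
def chunk_text(text: str, max_chars: int = 800, overlap: int = 100) -> list[str]:
    """Split a document into overlapping chunks on paragraph boundaries.

    Paragraphs are joined until the chunk approaches max_chars; a small
    overlap is carried into the next chunk so context isn't lost at the cut.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = current[-overlap:]  # carry a little context forward
        current = (current + "\n\n" + para).strip()
    if current:
        chunks.append(current)
    return chunks
```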

c. Embedding Models:
Choose, optimise and maintain embedding models that allow an application (or a Retrieval Augmented Generation (RAG) data pipeline) to consistently pull relevant business data. Embedding models vectorise the data fed to them, and applications query a database holding these vectors to identify how related one piece of data is to another. This is described as ‘relatedness’ or, mathematically, how close one vector is to another when queried.
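
A hedged sketch of the query-time maths: `embed()` below is a stand-in for whichever embedding model or vendor you choose, and cosine similarity is one common measure of how close two vectors are:

```python
import math

def embed(text: str) -> list[float]:
    """Stand-in for a real embedding model call (OpenAI, Cohere, a self-hosted
    sentence-transformer, ...). Returns a fixed-length vector for the text."""
    raise NotImplementedError("plug in your chosen embedding model here")

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Relatedness score: close to 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Query-time usage: embed the user's question and rank stored chunks by similarity.
# query_vec = embed("How do I request annual leave?")
# scores = [(cosine_similarity(query_vec, vec), chunk) for chunk, vec in store]
```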

d. Vector Databases:
Whilst your embedding model produces vectors from the data you feed it, you need somewhere to store those vectors so that the LLM can be given context drawn from the full set of data accessible to it. Often, these databases store the chunks alongside the embeddings so that content can be quickly retrieved when a relevant vector is found. Unfortunately, business data (such as Confluence documents) changes so often that pulling a stored chunk can bring out-of-date data in as context unless the index is kept in sync with the source.
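
To make the storage side concrete, here is a toy in-memory store that keeps each chunk’s text, its embedding and a source version for detecting staleness. Real deployments would use a managed vector database; this sketch reuses the `cosine_similarity` helper from the embedding example above:

```python
from dataclasses import dataclass

@dataclass
class StoredChunk:
    chunk_id: str
    source_doc: str      # which Confluence page / Slack thread it came from
    text: str            # stored alongside the embedding for fast retrieval
    embedding: list[float]
    source_version: str  # used to detect when the source has changed

class InMemoryVectorStore:
    def __init__(self):
        self._chunks: dict[str, StoredChunk] = {}

    def upsert(self, chunk: StoredChunk) -> None:
        # Re-indexing on every source update is what keeps context from going stale.
        self._chunks[chunk.chunk_id] = chunk

    def query(self, query_embedding: list[float], top_k: int = 5) -> list[StoredChunk]:
        ranked = sorted(
            self._chunks.values(),
            key=lambda c: cosine_similarity(query_embedding, c.embedding),
            reverse=True,
        )
        return ranked[:top_k]
```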

e. Large Language Models (LLMs):
Once you’ve set up everything above, you can finally choose the LLM that you believe is best placed to power your use case. In many cases this is either hosted by your customer, which you plug into (an option enterprises are increasingly considering), hosted by yourself, or provided by a vendor such as OpenAI, Anthropic, AWS or GCP (to name a few).
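
Because the hosting decision can differ per customer, it can help to hide the choice behind a thin interface so swapping providers doesn’t ripple through your application. The classes below are purely illustrative wrappers, not real SDK calls:

```python
from typing import Protocol

class LLMClient(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorLLM:
    """Illustrative wrapper around a hosted vendor API (OpenAI, Anthropic, AWS, GCP, ...)."""
    def __init__(self, api_key: str, model: str):
        self.api_key, self.model = api_key, model

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call your chosen vendor's SDK here")

class CustomerHostedLLM:
    """Illustrative wrapper for a model running inside the customer's own environment."""
    def __init__(self, endpoint_url: str):
        self.endpoint_url = endpoint_url

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("send the prompt to the customer's inference endpoint")
```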

f. Prompt Engineering:
Now you can finally do what you set out to do: iterate on the quality of your prompts to refine the answers given by LLMs. In our example, if you were building a customer service chatbot, you might find that injecting ‘As a customer service representative’ before your user’s own written prompt increases the quality of the response generated by the LLM. This is by far the highest-value part of application development for your engineers to focus on, as it is where they personalise the application’s experience for your users.
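
A minimal sketch of that role-prefix pattern, combining an instruction, the retrieved chunks and the user’s question into one prompt. The exact wording and the ‘Acme Corp’ name are illustrative:

```python
def build_prompt(user_question: str, context_chunks: list[str]) -> str:
    """Assemble the final prompt: role instruction + retrieved context + question."""
    context = "\n---\n".join(context_chunks)
    return (
        "As a customer service representative for Acme Corp, answer using only "
        "the context below. If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_question}\n"
        "Answer:"
    )

# Iterating on this template (tone, instructions, how context is formatted) is
# where most of the quality gains in the final answers tend to come from.
```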


2. New security requirements product managers need to factor into AI product delivery:

a. Always respect access control (a.k.a. permission management of your tools):

When building any AI application that pulls data from your customer’s databases or tools, it’s critical to ensure that you only pull data which your end user has access to. This means that in addition to setting up everything from data source connectors through to vector databases as mentioned above, you need to ensure the user can only retrieve data they are entitled to see. This is an additional complexity that developers need to navigate. Unfortunately, LLMs don’t understand permissions, so it is up to you to ensure the context you give them is properly scoped. And because that context comes from the vector database, you need to make sure the vector database has been populated with access control lists from each data source. One way to scope retrieval to those lists is sketched after the example below.

For example, if you’re building an HR AI application, it’s critically important that Amy, the HR leader, can ask almost any question about salary or employee performance, while Jane from the product team should only be able to ask questions about herself (and not other employees). This is where the internal data leakage problem really rears its head when building AI applications, and it is a key non-functional requirement for product and engineering leaders to devise meaningful approaches to.
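
One way to approach this, sketched under the same assumptions as the earlier store: attach the source tool’s access control list to every chunk and filter on it before ranking, so nothing unauthorised ever reaches the LLM. This again reuses the `cosine_similarity` helper from above, and the field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class PermissionedChunk:
    chunk_id: str
    text: str
    embedding: list[float]
    allowed_principals: set[str] = field(default_factory=set)  # users/groups copied from the source tool's ACL

def query_for_user(store: list[PermissionedChunk],
                   query_embedding: list[float],
                   user_principals: set[str],
                   top_k: int = 5) -> list[PermissionedChunk]:
    """Filter by permissions FIRST, then rank by similarity.

    The LLM never sees a chunk the requesting user couldn't open in the
    source system, because filtering happens before context is assembled.
    """
    visible = [c for c in store if c.allowed_principals & user_principals]
    ranked = sorted(visible,
                    key=lambda c: cosine_similarity(query_embedding, c.embedding),
                    reverse=True)
    return ranked[:top_k]
```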

b. Data Loss Prevention (DLP)

Data Loss Prevention is the practice of removing personally identifiable, sensitive or financial information from a set of data so that it is not shared across many systems, protecting the privacy of the individual or organisation it belongs to. Given AI applications often rely on services spread across multiple virtual private cloud environments and vendors (especially if you are using an external vendor like OpenAI for your LLM), DLP is a critical non-functional requirement. Enterprise customers will often mandate DLP to ensure that their customers’ private data is not exposed to other organisations (which may mishandle it or hold different security standards to the original holder of the data).

DLP is even more critical for organisations conscious of data residency, such as financial institutions, if they leverage overseas services. In this case, egress of customer data to other legal jurisdictions can be seen as a regulatory offence and, at worst, a law enforcement risk.

Understanding the importance of data loss prevention, it’s critical to factor it into your data pipeline and ensure the customers of your AI application can appropriately redact information before and after transacting with LLMs, as well as whenever end user data crosses vendors or geographic regions.
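
As a simplified illustration of a pre-LLM redaction pass, the sketch below masks obvious identifiers with regexes. Real DLP covers far more categories and typically relies on a dedicated detection service; the patterns here are illustrative only:

```python
import re

# Illustrative patterns only; production DLP covers many more categories
# (names, addresses, account numbers, health data) and usually uses a
# dedicated detection service rather than hand-rolled regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace detected identifiers with placeholder tokens before the text
    crosses a trust boundary (e.g. is sent to an external LLM vendor)."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +61 400 123 456"))
# -> "Contact Jane at [REDACTED_EMAIL] or [REDACTED_PHONE]"
```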


3. Do you have the data engineering skillset on your team to build (and the capacity to maintain) an AI application or feature-set?


The skillset required to implement the capabilities and infrastructure described above is a unique combination of data engineering, security and full-stack developers working together. Not only does this require you to design new teams to build AI features, it also requires a long-term commitment to optimising your data connectors, chunking, embedding models and permissions strategy to keep your use case operating reliably. Factor training and hiring into your plans so your team has time to understand these new technologies when ramping up your AI efforts, and ensure you have the budget to bring data engineering skillsets into your team to get ahead of chunking and embedding optimisation strategies.

4. You could otherwise not stress about anything in this article and just use Redactive (Elevator pitch incoming!) 

We think everything mentioned above is complicated and costly, and it increases the tech debt on your team whilst being a fundamental blocker to getting your existing software team focused on building a killer customer experience that leverages AI.

Redactive was built to retrieve permissioned data from its source, so that only the right information is served to the right user who has the right to see it. We live at the intersection of data engineering, application developer enablement and security. Redactive manages data connectors, chunking, embedding, vector stores and permissions management, continuously optimising each piece. Our service enables your engineering team to focus on prompt engineering and getting your application to market quickly, serving your customers with a uniquely secure solution architecture that your customers’ (or your own organisation’s) security team will love.

Enable AI for your organisation, responsibly.
