Latest Articles

Safeguarding Your Data in the Age of AI: How Redactive Enhances Security for Microsoft Copilot Users

As AI tools like Microsoft Copilot revolutionize productivity, they also introduce new risks, from misconfigured permissions to potential data leaks. Redactive Security steps in to protect your sensitive information while ensuring compliance with regulations like GDPR and CPS230. Discover how Redactive helps organizations confidently integrate AI by detecting mismatched permissions, preventing internal data leaks, and securing your enterprise tools.

Why we raised $11.5 Million to turbo-charge the delivery of our enterprise-grade developer platform for Responsible GenAI adoption

Over the last 8 months, Redactive has quickly grown to a 10-person team, with multiple financial services institutions as paying customers trusting our developer platform to get their Generative AI use cases from pilot to production. Today, we are excited to announce our $7.5 million USD ($11.5 million AUD) seed round co-led by Felicis and Blackbird, alongside Atlassian Ventures and automation unicorn Zapier.

The missing piece in the AI application Stack - RAG Index Sharing

The evolution of AI from highly specific models to the broad-reaching capabilities of Large Language Models (LLMs) marks a pivotal shift. Imagine a landscape where data sharing unlocks AI applications that are both specific to business use cases and secure.

Why hasn't AI taken over the Finance Industry?

AI has already proven to be a revolutionary force around the world. From deciphering dead languages to finding previously unknown patterns in historical data, it is redefining our understanding of our past.

4 things product leaders shipping AI capabilities need to be aware of

There are a number of new requirements and technologies that PMs need to factor into their roadmaps in order to build an AI app, outside the norm of 'traditional' SaaS application development.

Looking to use LLMs to unlock productivity in your company? Managing data permissions will accelerate value

The core value of a business is in its intellectual property, which is secured behind data permissions in a variety of tools (Confluence, SharePoint, Google Drive and Salesforce all have different permission models). While generalisable LLMs are extremely powerful, context about how a business operates, its processes and its customers is vital to solving problems.
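As a rough illustration of the idea (a minimal sketch with hypothetical types and data, not Redactive's implementation), retrieved content can be filtered against the querying user's existing permissions before it ever reaches the model's context:

```python
# Hypothetical sketch: only surface chunks the querying user may already read.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str          # e.g. "confluence", "sharepoint", "gdrive", "salesforce"
    allowed_users: set   # users permitted to read the underlying document

def permitted_context(chunks: list[Chunk], user: str, limit: int = 5) -> str:
    """Drop chunks the user cannot see, then build the prompt context."""
    visible = [c for c in chunks if user in c.allowed_users]
    return "\n\n".join(c.text for c in visible[:limit])

# Usage: the model only ever sees documents this user is permitted to read.
chunks = [
    Chunk("Q3 pricing strategy...", "confluence", {"alice", "bob"}),
    Chunk("Customer escalation playbook...", "salesforce", {"alice"}),
]
context = permitted_context(chunks, user="bob")
```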

The challenges with building an AI product with transparency at its core

One of the fundamental challenges is a lack of user trust in AI. This skepticism isn't baseless but rooted in the perceived opacity of AI operations. To overcome this, adopting transparency in the development and operation of AI products is fundamental.

Computer, Enhance: How to Think About LLMs Effectively

The size of the problem-space addressed by LLMs may seem dizzying at first glance; but with some deeper reflection, there's a clear way to think about them that's both simple and powerful. Having a mental framework in place to organise your approach to building next-gen AI apps will provide your designs with improved clarity, give you a coherent methodology for iterating on them, and reduce the time to shipping your product. This mental framework all comes down to the idea of fidelity.

You don't want a Vector Database

All of this interest in Vector Databases is driven by the explosion of Generative AI that ChatGPT sparked (less than 18 months ago!). Specifically, Vector Databases enable the Retrieval Augmented Generation (RAG) paradigm, which is currently the leading form of prompt engineering for actually extracting business value from Large Language Models (LLMs).
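As a rough illustration of the RAG loop (a minimal sketch: the embed() function here is a stand-in for a real embedding model, and a production system would use a proper vector store rather than an in-memory array), the query is embedded, the most similar stored chunks are retrieved, and they are prepended to the prompt:

```python
# Minimal RAG sketch with a toy in-memory "vector database".
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(8)
    return v / np.linalg.norm(v)

documents = ["Refund policy: 30 days...", "On-call rota lives in Confluence..."]
index = np.stack([embed(d) for d in documents])    # the "vector database"

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)                  # cosine similarity (unit vectors)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "What is our refund window?"
prompt = "Context:\n" + "\n".join(retrieve(query)) + f"\n\nQuestion: {query}"
```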