Auditing Outline: Firsthand lessons from comparing manual testing and AI security platforms
03 Feb 2026 - Posted by Luca Carettoni

In July 2025, we performed a brief audit of Outline - an OSS wiki similar in many ways to Notion. This activity was meant to evaluate the overall security posture of the application and involved two researchers for a total of 60 person-days. In parallel, we thought it would be a valuable firsthand experience to use three AI security platforms to audit the very same codebase. Given that all issues are now fixed, we believe it is interesting to provide an overview of our effort, along with a few notable findings and considerations.

Disclaimer: Outline
While this activity was not sufficient to evaluate the entirety of the Outline codebase, we believe we gained a good understanding of its quality and resilience. The security posture of the APIs was found to be above industry standards. Despite our findings, we were pleased to witness a well-thought-out use of security practices and hardening, especially given the numerous functionalities and integrations available.
It is important to note that Doyensec audited only Outline OSS (v0.85.1). On-premise enterprise and cloud functionalities were considered out of scope for this engagement. For instance, multi-tenancy is not supported in the OSS on-prem release, hence authorization testing did not consider cross-tenant privilege escalations. Finally, testing focused on Outline code only, leaving all dependencies out of scope. Ironically, several of the bugs discovered were actually caused by external libraries.
Disclaimer: AI platforms evaluated during this dry run
Large Language Models and AI security platforms are evolving at an exceptionally rapid pace. The observations, assessments, and experiences shared in this post reflect our hands-on exposure at a specific point in time and within a particular technical context. As models, tooling, and defensive capabilities continue to mature, some details discussed here may change or become irrelevant.
Instrumentation
When performing an in-depth engagement, it is ideal to set up a testing environment with debugging capabilities for both frontend and backend. Outline’s extensive documentation makes this process easy.
We started by setting up a local environment as documented in this guide, and executing the following commands:
echo "127.0.0.1 local.outline.dev" | sudo tee -a /etc/hosts
mkdir files
The following .env file was used for the configuration (non-empty settings only):
NODE_ENV=development
URL=https://local.outline.dev:3000
PORT=3000
SECRET_KEY=09732bbde65d4...989
UTILS_SECRET=af7b3d5a6cc...2f1
DEFAULT_LANGUAGE=en_US
DATABASE_URL=postgres://user:pass@127.0.0.1:5432/outline
REDIS_URL=redis://127.0.0.1:6379
FILE_STORAGE=local
FILE_STORAGE_LOCAL_ROOT_DIR=./files/
FILE_STORAGE_UPLOAD_MAX_SIZE=262144000
FORCE_HTTPS=true
OIDC_CLIENT_ID=web
OIDC_CLIENT_SECRET=secret
OIDC_AUTH_URI=http://127.0.0.1:9998/auth
OIDC_TOKEN_URI=http://127.0.0.1:9998/oauth/token
OIDC_USERINFO_URI=http://127.0.0.1:9998/userinfo
OIDC_DISABLE_REDIRECT=true
OIDC_USERNAME_CLAIM=preferred_username
OIDC_DISPLAY_NAME=OpenID Connect
OIDC_SCOPES=openid profile email
RATE_LIMITER_ENABLED=true
# ------------- DEBUGGING ------------
ENABLE_UPDATES=false
DEBUG=http
LOG_LEVEL=debug
Zitadel’s OIDC server was used for authentication:
REDIRECT_URI=https://local.outline.dev:3000/auth/oidc.callback USERS_FILE=./users.json go run github.com/zitadel/oidc/v3/example/server
Finally, VS Code debugging was set up using the following .vscode/launch.json (note that, for the attach configuration to work, the backend must be started with the Node inspector enabled on port 9229, e.g. via --inspect=9229):
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach to Outline Backend",
      "address": "localhost",
      "port": 9229,
      "restart": true,
      "protocol": "inspector",
      "skipFiles": ["<node_internals>/**"],
      "cwd": "${workspaceFolder}"
    }
  ]
}
We also facilitated front-end debugging by adding the following setting at the top of the .babelrc file to enable source maps:
"sourceMaps": true
Findings
Doyensec researchers discovered and reported seven (7) unique vulnerabilities affecting Outline OSS.
| ID | Title | Class | Severity | Discoverer |
|---|---|---|---|---|
| OUT-Q325-01 | Multiple Blind SSRF | SSRF | Medium | 🤖🙍♂️ |
| OUT-Q325-02 | Vite Path Traversal | Injection Flaws | Low | 🙍♂️ |
| OUT-Q325-03 | CSRF via Sibling Domains | CSRF | Medium | 🙍♂️ |
| OUT-Q325-04 | Local File Storage CSP Bypass | Insecure Design | Low | 🙍♂️ |
| OUT-Q325-05 | Insecure Comparison in VerificationCode | Insufficient Cryptography | Low | 🤖🙍♂️ |
| OUT-Q325-06 | ContentType Bypass | Insecure Design | Medium | 🙍♂️ |
| OUT-Q325-07 | Event Access | IDOR | Low | 🤖 |
Among the bugs we discovered, there are a few that require special mention:
OUT-Q325-01 (GHSA-jfhx-7phw-9gq3) is a standard Server-Side Request Forgery bug that follows redirects but has limited protocol support. Interestingly, this issue affects the self-hosted version only, as the cloud release is protected using request-filtering-agent. While taking a quick look at this dependency, we realized that versions 1.x.x and earlier contained a vulnerability (GHSA-pw25-c82r-75mm) where HTTPS requests to 127.0.0.1 bypass IP address filtering, while HTTP requests are correctly blocked. Although newer versions of the library were already out, Outline was still using an old release, since no GitHub (or other) advisories were ever created for this issue. Whether intentionally or accidentally, this issue remained silently fixed for many years.
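The class of check performed by request-filtering-agent can be sketched as follows. This is a simplified illustration, not the library's actual code; the isPrivateAddress helper is hypothetical and only covers a few common IPv4 ranges. The lesson from GHSA-pw25-c82r-75mm is that such filtering must be applied consistently on both the HTTP and HTTPS code paths:

```typescript
// Simplified sketch of SSRF egress filtering (NOT the real
// request-filtering-agent implementation). Rejects loopback, RFC 1918,
// and link-local IPv4 addresses before a request is allowed to proceed.
function isPrivateAddress(ip: string): boolean {
  const octets = ip.split(".").map(Number);
  if (octets.length !== 4 || octets.some((o) => Number.isNaN(o))) {
    return true; // fail closed on anything we cannot parse
  }
  const [a, b] = octets;
  return (
    a === 127 ||                         // loopback
    a === 10 ||                          // RFC 1918
    (a === 172 && b >= 16 && b <= 31) || // RFC 1918
    (a === 192 && b === 168) ||          // RFC 1918
    (a === 169 && b === 254) ||          // link-local / cloud metadata
    a === 0                              // "this" network
  );
}

// The vulnerable library versions applied this kind of check on the
// http:// path but skipped it for https://, so both must be guarded.
function assertSafeTarget(resolvedIp: string): void {
  if (isPrivateAddress(resolvedIp)) {
    throw new Error(`Blocked request to private address ${resolvedIp}`);
  }
}
```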
OUT-Q325-02 (GHSA-pp7p-q8fx-2968) turned out to be a bug in the vite-plugin-static-copy npm module. Luckily, it only affects Outline in development mode.
OUT-Q325-04 (GHSA-gcj7-c9jv-fhgf) was already exploited in this type confusion attack. In fact, browsers like Chrome and Firefox do not block script execution even if the script is served with Content-Disposition: attachment, as long as the content type is a valid application/javascript. Please note that this issue does not affect the cloud-hosted version, given that it does not use the local file storage engine at all.
Investigating this issue led to the discovery of OUT-Q325-06, an even more interesting issue.
Outline allows inline content for specific (safe) types of files, as defined in server/storage/files/BaseStorage.ts:
/**
 * Returns the content disposition for a given content type.
 *
 * @param contentType The content type
 * @returns The content disposition
 */
public getContentDisposition(contentType?: string) {
  if (!contentType) {
    return "attachment";
  }

  if (
    FileHelper.isAudio(contentType) ||
    FileHelper.isVideo(contentType) ||
    this.safeInlineContentTypes.includes(contentType)
  ) {
    return "inline";
  }

  return "attachment";
}
Despite this logic, the actual content type of the response was getting overridden. All Outline versions before v0.84.0 (May 2025) were actually vulnerable to Cross-Site Scripting because of this issue, and it was accidentally mitigated by adding the following CSP directive:
ctx.set("Content-Security-Policy", "sandbox");
When we analyzed the root cause, it turned out to be an undocumented, insecure behavior of KoaJS.
In Outline, the issue was caused by setting the expected “Content-Type” header before calling response.attachment([filename], [options]):
ctx.set("Content-Type", contentType);
ctx.attachment(fileName, {
  type: forceDownload
    ? "attachment"
    : FileStorage.getContentDisposition(contentType), // this applies the safe allowed-list
});
In fact, the attachment function triggers an unexpected Content-Type override through Koa’s type setter:
set type(type) {
  type = getType(type);
  if (type) {
    this.set('Content-Type', type);
  } else {
    this.remove('Content-Type');
  }
},
This insecure behavior is neither documented nor warned against by the framework. Swapping the order of the ctx.set and ctx.attachment calls is sufficient to fix the issue.
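The ordering hazard can be reproduced with a small self-contained mock. This is an illustration only: MockResponse loosely approximates Koa's behavior and is neither Koa's nor Outline's real code.

```typescript
// Minimal mock of the ordering hazard (illustration only, not real Koa
// code). attachment() mimics Koa by deriving a content type from the file
// extension and overwriting whatever the caller set before it.
class MockResponse {
  headers: Record<string, string> = {};

  set(name: string, value: string): void {
    this.headers[name.toLowerCase()] = value;
  }

  remove(name: string): void {
    delete this.headers[name.toLowerCase()];
  }

  attachment(filename: string, disposition: string): void {
    this.set("Content-Disposition", `${disposition}; filename="${filename}"`);
    // Simplified stand-in for Koa's type setter shown above: the type
    // guessed from the extension silently replaces the existing header.
    const guessed = filename.endsWith(".html") ? "text/html" : undefined;
    if (guessed) {
      this.set("Content-Type", guessed);
    } else {
      this.remove("Content-Type");
    }
  }
}

// Vulnerable order: set first, attach second -> the validated type is lost.
const vulnerable = new MockResponse();
vulnerable.set("Content-Type", "application/octet-stream");
vulnerable.attachment("payload.html", "attachment");
// vulnerable.headers["content-type"] is now "text/html"

// Fixed order: attach first, set second -> the validated type survives.
const fixed = new MockResponse();
fixed.attachment("payload.html", "attachment");
fixed.set("Content-Type", "application/octet-stream");
// fixed.headers["content-type"] is "application/octet-stream"
```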
Combining OUT-Q325-03, OUT-Q325-06 and Outline’s sharing capabilities, it is possible to take over an admin account, as shown in the following video, affecting the latest version of Outline at the time of testing:
Finally, OUT-Q325-07 (GHSA-h9mv-vg9r-8c7c) was discovered autonomously by an AI security platform. The events.list API endpoint contains an IDOR vulnerability allowing users to view events for any actor or document within their team without proper authorization:
router.post(
  "events.list",
  auth(),
  pagination(),
  validate(T.EventsListSchema),
  async (ctx: APIContext<T.EventsListReq>) => {
    const { user } = ctx.state.auth;
    const {
      name,
      events,
      auditLog,
      actorId,
      documentId,
      collectionId,
      sort,
      direction,
    } = ctx.input.body;

    let where: WhereOptions<Event> = {
      teamId: user.teamId,
    };

    if (auditLog) {
      authorize(user, "audit", user.team);
      where.name = events
        ? intersection(EventHelper.AUDIT_EVENTS, events)
        : EventHelper.AUDIT_EVENTS;
    } else {
      where.name = events
        ? intersection(EventHelper.ACTIVITY_EVENTS, events)
        : EventHelper.ACTIVITY_EVENTS;
    }

    if (name && (where.name as string[]).includes(name)) {
      where.name = name;
    }

    if (actorId) {
      where = { ...where, actorId };
    }

    if (documentId) {
      where = { ...where, documentId };
    }

    if (collectionId) {
      where = { ...where, collectionId };

      const collection = await Collection.findByPk(collectionId, {
        userId: user.id,
      });
      authorize(user, "read", collection);
    } else {
      const collectionIds = await user.collectionIds({
        paranoid: false,
      });
      where = {
        ...where,
        [Op.or]: [
          {
            collectionId: collectionIds,
          },
          {
            collectionId: {
              [Op.is]: null,
            },
          },
        ],
      };
    }

    const loadedEvents = await Event.findAll({
      where,
      order: [[sort, direction]],
      include: [
        {
          model: User,
          as: "actor",
          paranoid: false,
        },
      ],
      offset: ctx.state.pagination.offset,
      limit: ctx.state.pagination.limit,
    });

    ctx.body = {
      pagination: ctx.state.pagination,
      data: await Promise.all(
        loadedEvents.map((event) => presentEvent(event, auditLog))
      ),
    };
  }
);
While the code implements team-level isolation (via the teamId check) and collection-level authorization, it fails to validate access to individual events. An attacker can manipulate the actorId or documentId parameters to view events they shouldn’t have access to. This is particularly concerning since audit log events might contain sensitive information (e.g., document titles). This was a nice catch, and one that is not immediately evident to a human auditor without a deep understanding of Outline’s authorization model.
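One way to think about the missing check is to mirror what the handler already does for collectionId: authorize the referenced object before using it as a filter. The sketch below models this with hypothetical types and an authorizeRead helper; it is a simplification for illustration, not Outline's real data model or actual patch.

```typescript
// Hypothetical model of the missing per-object authorization (not
// Outline's real code). The vulnerable handler authorized the referenced
// collection when collectionId was supplied, but applied documentId (and
// actorId) filters without any equivalent check.
interface AppDocument {
  id: string;
  readableBy: string[]; // user IDs allowed to read the document
}

interface AppEvent {
  teamId: string;
  documentId?: string;
}

function authorizeRead(userId: string, doc: AppDocument): void {
  if (!doc.readableBy.includes(userId)) {
    throw new Error("authorization_error");
  }
}

function listEvents(
  events: AppEvent[],
  documents: Map<string, AppDocument>,
  userId: string,
  teamId: string,
  documentId?: string
): AppEvent[] {
  if (documentId) {
    const doc = documents.get(documentId);
    if (!doc) throw new Error("not_found");
    // The key step: authorize the referenced document before filtering by
    // it, mirroring the collectionId branch of the vulnerable handler.
    authorizeRead(userId, doc);
  }
  // Team-level isolation (present in the vulnerable code) plus the filter.
  return events.filter(
    (e) => e.teamId === teamId && (!documentId || e.documentId === documentId)
  );
}
```

With this check in place, querying by a documentId the caller cannot read raises an authorization error instead of leaking the document's event history.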
On the Use of AI tools
Despite the discovery of OUT-Q325-07, our experience using three AI security platforms was, overall, rather disappointing. LLM-based models can identify some vulnerabilities; however, the rate of false positives vastly outweighed the few true positives. What made this especially problematic was how convincing the findings were: the descriptions of the alleged issues were often extremely accurate and well-articulated, making it surprisingly hard to confidently dismiss them as false positives. As a result, cleaning up and validating all AI-reported issues turned into a 40-hour effort.
Such overhead during a paid manual audit is hard to justify for us and, more importantly, for our clients. AI hallucinations repeatedly sent us down unexpected rabbit holes, at times making seasoned consultants, with decades of combined experience, feel like complete newbies. While attempting to validate alleged bugs reported by AI, we found ourselves second-guessing our own judgment, losing valuable time that could have been spent on higher-impact tasks.
While the future undoubtedly involves LLMs, it is not quite here yet for high-quality security engagements targeting popular, well-audited software. At Doyensec, we will continue to explore and experiment with AI-assisted tooling, adopting it when and where it actually adds value. We don’t want to be remembered as anti-AI hypers, but we’re equally not interested in outsourcing our expertise to confident-sounding hallucinations. For now, human intuition, experience, and skepticism - combined with top-notch tooling - remain very hard to beat. Challenge us!
