How we use AI — and how we don't
We use AI to help people make better career decisions — evaluating job offers, understanding contracts, benchmarking salaries, and recovering from redundancy. These are high-stakes, personal moments. We hold ourselves to a higher standard.
What we commit to
Grounded, not generated
We don't let AI invent facts. When we name an employer as hiring, cite a salary range, or flag a trend, it's grounded in real data — market surveys, live search results, or information you've provided. Where we don't have enough data, we say so.
No demographic bias
Career advice should be shaped by what you do, what you earn, and what you want — not who you are. Our AI rules prohibit advice influenced by age, gender, ethnicity, disability, or any protected characteristic. If you're underpaid, we tell you — we don't normalise it.
Honest about limitations
AI isn't infallible. Every output carries a confidence indicator so you know when we're working from strong data versus a reasonable inference. We verify claims against real-time search results and label what's verified, what's likely, and what's estimated.
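To make that concrete, here is a simplified sketch of what a labelled claim can look like. The type and field names are illustrative rather than our actual schema; the point is that every claim travels with its sources and exactly one of the three labels.

```typescript
// Illustrative sketch only; the real schema is broader than this.
type ConfidenceLabel = "verified" | "likely" | "estimated";

interface GroundedClaim {
  statement: string;   // e.g. "Typical London range for this role: £55k to £65k"
  sources: string[];   // survey names, search result URLs, or data you gave us
  confidence: ConfidenceLabel;
}

// A claim only earns "verified" when it has sources and independent corroboration;
// anything without sources stays an estimate and is shown as such.
function labelClaim(statement: string, sources: string[], corroborated: boolean): GroundedClaim {
  if (sources.length === 0) {
    return { statement, sources, confidence: "estimated" };
  }
  return { statement, sources, confidence: corroborated ? "verified" : "likely" };
}
```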
Not a replacement for experts
We provide career intelligence — observations, benchmarks, and analysis. Not legal, financial, or medical advice. When our analysis touches legal rights or wellbeing, we explain the general position and recommend speaking to a professional.
Your data stays yours
Your data is encrypted in transit and at rest. We never sell it or share it with advertisers. AI processing uses Anthropic's Claude API — your data isn't used to train their models. You can export everything or delete your account at any time.
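For the technically curious, here is a minimal sketch of how a request reaches Claude using Anthropic's official TypeScript SDK. The model name and prompt are placeholders, not our production configuration; what the sketch shows is that the request goes to Anthropic's Messages API over TLS.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Minimal sketch: the model name and prompt are placeholders, not our
// production configuration. The SDK sends the request over TLS to
// Anthropic's Messages API.
const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const response = await client.messages.create({
  model: "claude-3-5-sonnet-latest", // placeholder model identifier
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Benchmark this offer against the attached salary data." },
  ],
});

console.log(response.content);
```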
Fair pricing, no dark patterns
One price. Everything included. No locked tiers, no upsells hidden behind paywalls, no premium insights that should have been in the base product. Recovery support is available separately because not everyone needs it — but it's never pushed on you.
How we enforce it
These aren't aspirational statements — they're enforced in the codebase.
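One illustrative example of what that can look like, simplified: demographic fields can be stripped from a profile before any prompt is assembled, so the model never sees them. The field names below are hypothetical and the real lists are longer; this is a sketch of the kind of check we mean, not a copy of our implementation.

```typescript
// Simplified illustration: field names are hypothetical, and the real lists of
// protected characteristics and profile fields are longer than shown here.
type Profile = Record<string, unknown>;

const PROTECTED_FIELDS = ["age", "gender", "ethnicity", "disability"];

// Copy across only the fields the model is allowed to see.
function redactProtectedFields(profile: Profile): Profile {
  const redacted: Profile = {};
  for (const [key, value] of Object.entries(profile)) {
    if (!PROTECTED_FIELDS.includes(key)) {
      redacted[key] = value;
    }
  }
  return redacted;
}

// Example: role, salary, and goals survive into the prompt context; age does not.
const promptContext = redactProtectedFields({
  role: "Senior Analyst",
  salary: 48000,
  age: 52,
  goals: "move into management",
});
console.log(promptContext); // { role, salary, goals }
```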
Regulatory alignment
We build with UK GDPR, the Data Protection Act 2018, ICO guidance on AI, and the Equality Act 2010 in mind. Our AI features would likely fall under limited-risk classification in the EU AI Act — we apply transparency obligations voluntarily.
We're a small company and we don't have a legal team. But we take this seriously: we build safeguards into the product, and we'll always be transparent about where we are and where we're still improving.