Switch AI from OpenAI to Gemini 2.0 Flash (free, key exists)

All AI features now use Gemini 2.0 Flash via the existing API key.
Falls back to OpenAI if OPENAI_API_KEY is set instead.
Falls back to heuristics if neither key exists.

Gemini free tier: 15 RPM, 1M tokens/day, 1500 RPD
At PNPL's scale this is effectively unlimited and costs £0.

Changed:
- src/lib/ai.ts: chat() → tries Gemini first, OpenAI fallback
- src/app/api/automations/ai/route.ts: same dual-provider pattern
- docker-compose.yml: GEMINI_API_KEY added to app environment

All 11 AI features now work:
- Smart amount suggestions, message generation, fuzzy matching
- Column mapping, event parsing, impact stories, daily digest
- Nudge composer, donor classification, anomaly detection
- A/B variant generation, rewrites, auto-winner evaluation
This commit is contained in:
2026-03-05 00:56:44 +08:00
parent b25d8c453a
commit ea37d7d090
2 changed files with 88 additions and 18 deletions

View File

@@ -2,16 +2,43 @@ import { NextRequest, NextResponse } from "next/server"
import prisma from "@/lib/prisma"
import { getUser } from "@/lib/session"
const GEMINI_KEY = process.env.GEMINI_API_KEY
const OPENAI_KEY = process.env.OPENAI_API_KEY
const MODEL = "gpt-4o-mini"
const HAS_AI = !!(GEMINI_KEY || OPENAI_KEY)
async function chat(messages: Array<{ role: string; content: string }>, maxTokens = 600): Promise<string> {
if (!OPENAI_KEY) return ""
if (!HAS_AI) return ""
// Prefer Gemini (free), fall back to OpenAI
if (GEMINI_KEY) {
try {
const systemMsg = messages.find(m => m.role === "system")?.content || ""
const contents = messages.filter(m => m.role !== "system").map(m => ({
role: m.role === "assistant" ? "model" : "user",
parts: [{ text: m.content }],
}))
const res = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${GEMINI_KEY}`,
{
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
systemInstruction: systemMsg ? { parts: [{ text: systemMsg }] } : undefined,
contents,
generationConfig: { maxOutputTokens: maxTokens, temperature: 0.8 },
}),
}
)
const data = await res.json()
return data.candidates?.[0]?.content?.parts?.[0]?.text || ""
} catch { return "" }
}
try {
const res = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: { "Content-Type": "application/json", Authorization: `Bearer ${OPENAI_KEY}` },
body: JSON.stringify({ model: MODEL, messages, max_tokens: maxTokens, temperature: 0.8 }),
body: JSON.stringify({ model: "gpt-4o-mini", messages, max_tokens: maxTokens, temperature: 0.8 }),
})
const data = await res.json()
return data.choices?.[0]?.message?.content || ""
@@ -288,7 +315,7 @@ Rewrite it following the instruction.`
// Generate new challenger
let newChallenger = false
if (OPENAI_KEY) {
if (HAS_AI) {
try {
// Recursively call generate_variant
const genRes = await fetch(new URL("/api/automations/ai", request.url), {