"Design a local AI coding setup for me.
Hardware: [CPU/GPU/RAM]
Operating system: [OS]
Constraints: [privacy, offline use, budget]
Typical tasks: [simple coding help, reviews, refactors, chat]
Recommend:
1. the best local model size I can realistically run
2. whether Ollama or LM Studio is the better starting point
3. what tool integration I should use
4. which tasks should still go to cloud models if policy allows"

Running Local Models — Ollama and LM Studio
Run AI models on your own machine — privacy benefits, performance tradeoffs, setup guides, and when local makes sense.
Prompt FAQ
Questions to answer before you paste it
When should I use the Running Local Models — Ollama and LM Studio prompt?
Use it when you are planning a local AI setup — weighing privacy benefits, performance tradeoffs, and whether local inference makes sense for your tasks — and you want a safer starting point than a blank prompt, with the agent kept inside explicit constraints.
Should I paste this prompt exactly as written?
No. Treat it as a safe starter. Replace the task, files, constraints, and verification details with your actual context before you run it.
What should I do after the agent answers?
Read the diff, run the checks, and stop after one reviewable step. If you need deeper context, open the lesson that explains the reasoning behind the prompt.
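Once the agent recommends a model, Ollama lets you pin your preferences in a Modelfile so every session starts with the same settings. A minimal sketch, assuming Ollama is installed and the hypothetical `llama3` base model fits your hardware — swap in whatever the agent actually recommends:

```
# Minimal Ollama Modelfile (sketch — model name and parameters are
# placeholders; replace them with the agent's recommendation).
FROM llama3
PARAMETER temperature 0.2
SYSTEM "You are a concise local coding assistant. Prefer small, reviewable changes."
```

Build and run it with `ollama create coder -f Modelfile` followed by `ollama run coder`. LM Studio exposes comparable per-model settings through its GUI instead of a file.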
Related prompts worth copying next
OpenAI vs Anthropic vs Open Source — An Honest Comparison
Compare the major AI model providers — strengths, weaknesses, pricing, and when to use which.
API Keys, Rate Limits, and Cost Management
Understand AI API pricing, rate limits, budgets, and how to monitor and control your AI spending.
Bolt.new and Lovable — Instant Full-Stack Apps
Build complete web applications from a text prompt with Bolt.new and Lovable.