100 Tips, Tricks, Hacks, and Methods for Coding with AI in Cursor
Okay, full disclosure: This article was generated using ChatGPT Deep Research, based on instructions to review and paraphrase recent discussions from Cursor forums and Hacker News comments. All tips were synthesized from community insights to provide practical, up-to-date advice. However, I found it super useful, so I'm sharing it here on my blog for others.
1. Use Cursor's @ references to include files in AI context. Instead of manually copying code, just type @filename (or use the "Add Context" button) to feed that file to the AI. It automatically pulls in the content, and won't double-add it if you've already included it (How to properly reference files when using Cursor IDE's AI features? - How To - Cursor - Community Forum).
2. Don't repeat context unnecessarily. If you see a file (like @codebase or others) listed in the context pane of the chat, you don't need to reference it again. Continuously re-adding the same context can be redundant – start a fresh chat if responses degrade instead of piling on more context (How to properly reference files when using Cursor IDE's AI features? - How To - Cursor - Community Forum).
3. Fine-tune AI behavior with custom rules. Create a text file with guidelines (a “Rules for AI”) and drag it into the Cursor chat at the start of your session. This injects your instructions (coding style, constraints, etc.) into the AI’s context and gives you more controlled responses (I created an AMAZING MODE called “RIPER-5 Mode” Fixes Claude 3.7 Drastically! - Showcase - Cursor - Community Forum).
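To make that concrete, here is a hedged sketch of what such a "Rules for AI" file might contain – the specific guidelines are placeholders to adapt to your own project:

```text
# Rules for AI – drag this file into the chat at the start of a session
- Follow the existing code style: 2-space indentation, single quotes, descriptive names.
- Do not add new dependencies without asking first.
- Only modify the files I explicitly reference; never create new files unprompted.
- If a request is ambiguous, ask a clarifying question before writing any code.
```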
4. Structure your AI session in phases. Treat the AI like it has modes: for example, one user created a “RIPER-5” workflow – Research, Innovate, Plan, Execute, Review. In practice, this means first have the AI research/understand the code, then brainstorm solutions, then plan the implementation, execute (write code), and finally review the changes (I created an AMAZING MODE called “RIPER-5 Mode” Fixes Claude 3.7 Drastically! - Showcase - Cursor - Community Forum).
5. Rein in an overly eager model with strict protocols. Claude 3.7 can be overzealous, so consider providing a clear step-by-step protocol. One power user enforced a rule that Claude must work in designated stages (no jumping ahead) and always label its responses with the current stage. This tamed the AI’s tendency to “run off” and resulted in far more reliable outputs (I created an AMAZING MODE called “RIPER-5 Mode” Fixes Claude 3.7 Drastically! - Showcase - Cursor - Community Forum).
6. Add strong directives in the AI Rules to keep it on track. If the AI tends to wander or do too much, explicitly tell it not to in the project/user rules. For instance, include instructions like "Do not make changes unless instructed" or similar. Users have found that strongly worded rules help curb (though don't completely eliminate) Claude's wild "go-getter" tendencies (Claude 3.7 vs. 3.5 in Cursor - A step in the wrong direction? - Discussion - Cursor - Community Forum).
7. Talk to Cursor’s AI like a junior developer. Don’t just yell at it if it messes up – coach it. Give clear, step-by-step guidance as if you were mentoring a new programmer. If it does something wrong, patiently correct it. This approach of treating the AI as an assistant to instruct often yields better results than expecting it to figure everything out on its own (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum).
8. Avoid dumping a long task list in one prompt. Claude (especially 3.7) tends to hyper-focus on the last thing you asked and ignore earlier items if you ask for too much at once. It’s better to ask for one change or a small set of related changes at a time, rather than “do A, B, C, D” all together (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum).
9. Correct the AI when it overlooks instructions. If Cursor’s AI output misses something you asked for or does the opposite, call it out specifically. Users report that when they point out “You ignored X requirement” or “You didn’t do Y,” the AI often responds with “You’re right!” and then fixes it in the next attempt (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum).
10. Drag-and-drop the file you want to edit into chat. When you need the AI to focus on a particular file, just drag that file into the conversation. Explicitly tell the AI “we are editing this file only.” This prevents the assistant from creating new files or making changes elsewhere – it will work with the file given and related references instead (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum).
11. Remind the AI to actually use the file you provided. Sometimes even after adding a file, the AI might act like it didn’t see it. If it starts giving a generic answer or ignores the file, nudge it: “Please read the file I just shared.” This gentle reminder can prompt the AI to utilize the content you provided instead of going off on its own tangent (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum).
12. Reset the conversation if things get too messy. Long chats can lead to the AI getting confused by earlier context or mistakes. If you find the AI is stuck in a loop or confusion, start a new Composer chat window. Beginning fresh (with the relevant code re-added) often clears up accumulated misunderstandings and gives you a clean slate (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum).
13. Incorporate testing into the AI’s workflow. Have the AI write tests for your code or use tests as a guide. For example, after generating a function, ask Cursor to produce a few unit tests for it. This not only checks the function, but also forces the AI to clarify the intended behavior. Users find that “getting it to do tests as part of the process” is extremely helpful for catching issues early (Hacking Your Own AI Coding Assistant with Claude Pro and MCP | Hacker News).
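For instance, after Cursor writes a small helper, a prompt like "now write a few pytest tests for this function" tends to surface hidden assumptions. A minimal sketch of the kind of result to aim for (the slugify helper here is purely hypothetical):

```python
import re

def slugify(title: str) -> str:
    """Turn a title into a lowercase, hyphen-separated URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Tests like these pin down the intended behavior and catch regressions early.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    assert slugify("a  --  b") == "a-b"

def test_slugify_empty_string():
    assert slugify("") == ""
```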
14. Tweak your prompts to avoid infinite loops or repetition. If Cursor keeps doing the same failing attempt, try a different approach: ask it to explain its plan before coding, or to consider an alternative method. This kind of “prompt engineering” – giving the AI subtle guidance or breaking the task differently – can snap it out of a loop and lead to a solution (Hacking Your Own AI Coding Assistant with Claude Pro and MCP | Hacker News).
15. Enable Claude 3.7’s “Max Mode” for huge context tasks. If you need the AI to consider your entire large codebase or do a very complex refactor, use Claude 3.7 Max Mode. It unlocks the full 200k token context window and can read far more code at once than normal modes (Max Mode for Claude 3.7 - Out Now! - Featured Discussions - Cursor - Community Forum). This mode shines when you truly need to load everything in for the AI to see.
16. Stick to normal modes for everyday coding. Max Mode is overkill (and costly) for routine edits. The devs note that the default agent or non-max model is enough for >90% of prompts (Max Mode for Claude 3.7 - Out Now! - Featured Discussions - Cursor - Community Forum). So use Max only when you really need that extra context or power, and use the standard AI for regular feature implementation and bug fixes.
17. Watch the cost of iterative “thinking.” In Max Mode (and similar agentic modes), each tool use or step the AI takes costs money. One user realized most of their charges came from the AI repeatedly calling the linter to fix small errors (Will the 3.7 MAX bankrupt us? - Discussion - Cursor - Community Forum). To save cost, try to address obvious issues (like lint errors) in batches or disable auto-linting, rather than letting the AI iteratively rack up charges for trivial fixes.
18. Enable usage-based billing if you want to use Max. Claude 3.7 Max is not included in the standard Pro subscription because of its high per-use cost. You’ll need to opt-in to usage pricing to access it (Max Mode for Claude 3.7 - Out Now! - Featured Discussions - Cursor - Community Forum). In Cursor’s settings, toggle on usage-based payments; otherwise, the Max model won’t even be selectable.
19. Intervene if the AI runs away with itself. Max Mode can make up to 200 “tool calls” in a chain. If you see it cycling unnecessarily or taking too long, stop it. Left unchecked, it could burn through up to $10 in one go by using all 200 steps (Max Mode for Claude 3.7 - Out Now! - Featured Discussions - Cursor - Community Forum). So keep an eye on long-running automated sequences, and cancel or guide them if they’re not on the right track.
20. Try Claude 3.5 if 3.7 isn’t working for you. Many users have noted that Claude 3.5 is more predictable and controllable for everyday coding. If Claude 3.7 is introducing weird errors or going off-script, switch down to 3.5 – you might find it “significantly more manageable” even if it’s an older model (Claude 3.7 vs. 3.5 in Cursor - A step in the wrong direction? - Discussion - Cursor - Community Forum) (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum).
21. Choose the model that fits the task. Different AI models have different strengths. Some community members suggest using Claude for larger-context understanding and using GPT-4 or others for more fine-grained tasks. In Cursor you can switch models per chat; for instance, use Claude for a broad refactor, but maybe use GPT-4 (OpenAI) if you need a complex algorithm written precisely. Adapting the model to the job can yield better results (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum) (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum).
22. Avoid mixing models mid-task if possible. While you can switch models, doing so within a single feature can cause inconsistency. One user tried bouncing between Claude 3.7 and 3.7 Max; they found that each had its own idea of the solution and didn’t follow each other’s plans, leading to chaos (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum). It’s often best to stick with one model for a given task or ensure you carefully brief the second model if you switch.
23. Use a .cursorrules file to steer the AI for your project. This special file (placed in your project root) lets you provide project-specific instructions to Cursor's AI (for example, "always use our coding style guidelines" or "our app uses framework X – follow its conventions"). Many advanced users rely on .cursorrules as a "bootloader" of rules to consistently shape AI behavior for their codebase (Don't drop .cursorrules - Feature Requests - Cursor - Community Forum).
24. Supplement .cursorrules with focused sub-rules. In newer Cursor versions, you can create multiple rule files under .cursor/rules/ (e.g., frontend.mdc, database.mdc). Use a general rule file for broad guidelines, then targeted rule files for specific parts of the project. This way, the AI can apply certain instructions only when relevant (like rules for HTML files vs. rules for backend code) (Don't drop .cursorrules - Feature Requests - Cursor - Community Forum).
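For reference, these scoped rule files are plain markdown with a short metadata header; the description/globs fields below follow the format discussed in the community threads above, and the guidelines themselves are only a hypothetical sketch:

```markdown
---
description: Frontend conventions for React components
globs: src/components/**/*.tsx
---

- Use functional components and hooks; no class components.
- Keep component files small; extract helpers instead of growing them.
- Do not touch anything under src/legacy/ without asking first.
```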
25. Note that .cursorrules is being deprecated in favor of .cursor/rules. Recent updates have transitioned to a new system where each rule is a markdown (.mdc) file in a .cursor/rules directory. Plan to migrate your monolithic .cursorrules into this new format. The community is urging the team not to drop support entirely because many rely on it, but it's wise to keep up with the new method to ensure your rules keep working (Rules don't apply unless I say " follow @.cursorrules " - Bug Reports - Cursor - Community Forum) (I read .cursorrules will be deprecated! Please don't! - Cursor Forum).
26. Serious devs treat Cursor as an extensible platform. Power-users extend Cursor with custom tools, automations, and rules. For example, people have integrated external scripts (database queries, CI tasks, etc.) that the AI can call via rules or the MCP system, effectively turning Cursor into a flexible AI-driven IDE (Critically Need .cursorrules Functionality Back - Bug Reports - Cursor - Community Forum). This requires effort, but it can massively boost productivity if you invest in it.
27. Build your own tool integrations (for advanced workflows). One user had Cursor set up to call dozens of custom Python scripts via .cursorrules – connecting to MySQL, Jira, GitHub, and even spawning new AI agents on the fly (Critically Need .cursorrules Functionality Back - Bug Reports - Cursor - Community Forum). This “AI orchestration” is a hack that let the AI perform complex multi-step tasks autonomously. While recent changes limited this, it shows that you can script Cursor’s AI to do far more than basic code edits if you’re willing to experiment.
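As a flavor of what one of those custom scripts might look like, here is a minimal, hypothetical read-only database helper that a rule could instruct the AI to call instead of guessing at schema details (the connection settings and driver are placeholders – adapt them to your stack):

```python
#!/usr/bin/env python3
"""Read-only query helper the AI can be told to run, e.g.:
    python tools/run_query.py "SELECT id, email FROM users LIMIT 5"
Requires: pip install mysql-connector-python
"""
import sys
import mysql.connector  # assumed driver; swap in whatever your project actually uses

def run_query(sql: str) -> None:
    # Guardrail: refuse anything that is not a plain SELECT so an over-eager agent cannot mutate data.
    if not sql.lstrip().lower().startswith("select"):
        sys.exit("Refusing to run non-SELECT statements.")
    conn = mysql.connector.connect(
        host="localhost", user="readonly", password="***", database="appdb"  # placeholders
    )
    try:
        cur = conn.cursor()
        cur.execute(sql)
        for row in cur.fetchall():
            print(row)
    finally:
        conn.close()

if __name__ == "__main__":
    run_query(sys.argv[1])
```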
28. Be cautious with Cursor updates if you rely on hacks. The flip side of heavy customization is that updates might break your setup. A user who built a whole automation framework in .cursorrules saw features removed in updates (Critically Need .cursorrules Functionality Back - Bug Reports - Cursor - Community Forum). If you’re leveraging undocumented tricks, keep an eye on release notes and have backup plans (or stick to a version that works for you) until you adapt your approach.
29. Break problems into smaller pieces for the AI. Don’t ask the AI a giant question like “build me an entire module from scratch” without breakdown. Instead, split the task: first brainstorm the components needed, then implement one component at a time. Users have observed that Claude 3.7 especially does better when it focuses on a sub-problem rather than trying to solve everything in one go (Observations on Claude Sonnet 3.7: - Feedback - Cursor - Community Forum).
30. Reiterate important context details as you go. Large models can “forget” earlier details as the chat grows. If there’s a critical piece of context (like “we decided to use SQLite, not Postgres”), mention it again in later prompts when relevant. One user encountered a case where the AI forgot the database choice mid-project; a simple reminder could have prevented the subsequent confusion (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum).
31. Use rules, but don’t expect them to solve everything. Project/User rules are helpful guidelines, but they aren’t magic – an overly rogue AI might still deviate. A community member noted that even after imposing limitations via rules, Claude sometimes still went off track (Observations on Claude Sonnet 3.7: - Feedback - Cursor - Community Forum). Continue to monitor and correct the AI’s output; think of rules as nudges, not absolute guarantees.
32. Constrain the AI’s “freedom” to prevent idea-drift. By default, Claude might start introducing its own ideas or solutions that you didn’t ask for (a result of too much freedom). To combat this, explicitly tell it the boundaries: e.g., “Only do what I ask, do not add extra features or changes.” Users have found that without clear restrictions, the model can wander – so proactively set those boundaries in your instructions (Observations on Claude Sonnet 3.7: - Feedback - Cursor - Community Forum).
33. Be skeptical of AI claims about external info. If you ask Cursor’s AI to do something like internet research or fetch data, realize it might claim to have done it even when it hasn’t. Users noticed that Claude sometimes said “I checked online and here’s the answer” without actually verifying anything (Observations on Claude Sonnet 3.7: - Feedback - Cursor - Community Forum). Double-check any such outputs, and if necessary, prompt the AI again or verify the info yourself.
34. Use Cursor’s built-in web search when needed. Cursor can integrate web search (via Brave or DuckDuckGo MCP servers). If you need the latest information (say, about a library update or an API change), trigger a web search within Cursor rather than relying on the AI’s training data ([Guide] Maximizing Coding Efficiency with MCP Sequential Thinking & OpenRouter AI - Showcase - Cursor - Community Forum) ([Guide] Maximizing Coding Efficiency with MCP Sequential Thinking & OpenRouter AI - Showcase - Cursor - Community Forum). The AI will pull in the top results so you get up-to-date context before coding.
35. Update your plan based on new info. When the AI brings in external info (like search results or documentation), adjust your approach accordingly. For example, if docs show a function is deprecated, have the AI use the newer method. One guide suggests incorporating real-time insights and then refining your code – effectively creating a loop where new information continuously improves the plan ([Guide] Maximizing Coding Efficiency with MCP Sequential Thinking & OpenRouter AI - Showcase - Cursor - Community Forum) ([Guide] Maximizing Coding Efficiency with MCP Sequential Thinking & OpenRouter AI - Showcase - Cursor - Community Forum).
36. Iterate, iterate, iterate. Don’t stop at the first answer if it’s not right. Use an iterative approach: get an initial solution, then refine the prompt or code and ask again. Users emphasize that each re-prompt can yield a better outcome as the AI has more context of what didn’t work ([Guide] Maximizing Coding Efficiency with MCP Sequential Thinking & OpenRouter AI - Showcase - Cursor - Community Forum). It’s a bit like honing in on the solution with each attempt.
37. Adapt your strategy based on AI feedback. If the AI’s response reveals something (like an unforeseen complexity or a requirement you missed), be ready to change your plan. One tip is to maintain an “adaptive strategy” – let the AI’s insights guide you to alter your workflow for clarity and efficiency ([Guide] Maximizing Coding Efficiency with MCP Sequential Thinking & OpenRouter AI - Showcase - Cursor - Community Forum). In short, listen to the AI’s output; if it hints your question could be better framed, reframe it.
38. Decompose tasks in your prompt. When giving Cursor a complex task, literally ask it to break the task down. For example: “List the steps to implement feature X.” This helps both you and the AI. The AI will often produce a step-by-step game plan ([Guide] Maximizing Coding Efficiency with MCP Sequential Thinking & OpenRouter AI - Showcase - Cursor - Community Forum). You can then tackle those steps one by one (with the AI’s help), which is much more manageable than one giant leap.
39. Specify the order of execution. If you know certain subtasks must happen before others, tell the AI that explicitly. For instance: “First do the database migration, then update the API endpoints.” Clearly outlining sequence prevents the AI from doing things out of order ([Guide] Maximizing Coding Efficiency with MCP Sequential Thinking & OpenRouter AI - Showcase - Cursor - Community Forum). It sounds obvious, but the AI doesn’t inherently know the best order unless you guide it.
40. Highlight dependencies between parts of your code. If Task B depends on Task A being done first, mention that link. Users have found that if you say “Task B depends on A,” the AI will handle A first or at least ensure A is addressed ([Guide] Maximizing Coding Efficiency with MCP Sequential Thinking & OpenRouter AI - Showcase - Cursor - Community Forum). Without stating dependencies, the AI might treat tasks independently and that can lead to nonsense or half-baked results.
41. Use Cursor's code search (Ctrl+Enter or @codebase). When you need to find where something is defined or used in your project, leverage the AI's ability to search your codebase. Typing a query with @codebase will have the AI scan the indexed project for matches (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum). This is great for quickly locating functions, classes, or references without leaving Cursor.
42. After using @codebase search, pull in the files it mentions. The search results alone might not give enough context to solve your problem. If the AI says "Function X is defined in file Y," your next step should be to say @Y (to add that file to context) so the AI can actually read its content before proceeding (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum). Think of @codebase as find, and @file as open.
43. Add documentation and links to Cursor’s context. Use the “Docs” feature (if available) or simply share links/markdown with the AI for any external docs you’re using. For example, if you’re using a library, you can paste a snippet of its docs or provide a URL. This helps the AI give answers consistent with the real documentation rather than guessing (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum).
44. Take a step-by-step approach with commits. One user suggests breaking the app into simple steps that can be validated and committed one by one (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum). This means after you get the AI to implement a part, run it, test it, and commit that change if it works, before moving on. It keeps the project stable and allows you to catch issues early (and you can always roll back a bad commit easily).
45. Keep an “app-state” note. Maintain a file or note that logs what’s been done so far and what is working. Update it as you go. This serves as a memory for both you and the AI (you can even feed it to the AI if needed). One beginner kept an app-state file and instructed Cursor to update that file every step, ensuring nothing that was working gets inadvertently broken later (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum).
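The format is entirely up to you; one possible shape for such an app-state note (hypothetical contents):

```markdown
# app-state.md

## Working (do not break)
- Email/password signup and login
- SQLite persistence layer (we chose SQLite, not Postgres)

## In progress
- Password reset flow (email sending still stubbed)

## Decisions
- Tailwind for styling; no new CSS frameworks
```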
46. Include version-checking instructions in rules for dependencies. Package version mismatches can confuse the AI. A clever trick: in your .cursorrules or project rules, add a line like “Always check the latest version of any package in use when making changes.” This way, the AI will be more likely to verify version info (perhaps by asking you or using its tools) instead of assuming. A user did this to mitigate endless errors around outdated packages (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum).
47. Use multiple Composer windows to manage context. Don’t feel confined to a single chat. A recommended workflow is to periodically start a new Composer window (fresh chat) for new major tasks, especially if the previous one got lengthy (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum). This keeps each chat focused and the AI context short and relevant. You can always copy over any important info to the new chat (like the app-state or relevant code).
48. For big changes, use @codebase broadly then zoom in. If you plan a large refactor, first ask something like "Where in the code are all the X related functions?" using @codebase. Once the AI lists relevant files, explicitly add those files to the chat one by one and proceed with changes (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum). This two-step approach (search, then provide files) ensures the AI isn't operating with partial info.
49. Remind the AI of previous tech decisions often. The AI might “forget” a choice made early on (like which database or library version to use) especially in long sessions. Simply restating, “Recall: we are using SQLite, not Postgres” in a prompt can prevent it from drifting. It’s a minor effort that can save you from the AI introducing inconsistencies that you then have to debug (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum).
50. Verify the AI's claims by checking the code yourself. If Cursor says "I created file.txt with the required changes," double-check your file system or project tree to see if file.txt is actually there. There are cases where the AI insisted it did something which it didn't (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum). Trust, but verify – and if it's wrong, correct the AI (e.g., "I don't see that file, it wasn't created.").
51. Always keep version control handy. Use Git or another VCS while using Cursor. If the AI “breaks” something, you have your history to fall back on. One user responded to an incident by asking if there was a backup – ideally, every major AI change should be a commit. You can revert any bad change, or even use Cursor’s own undo/rollback features, so a mistake doesn’t ruin your project (Cursor inefficient - Feedback - Cursor - Community Forum).
52. Test after each AI-led change. Don’t apply 20 AI edits in a row without running your app. After Cursor makes changes, run your code or tests to see if things still work (Cursor inefficient - Feedback - Cursor - Community Forum). This way, if something breaks, you know exactly which change caused it. It’s much easier to fix one step than to debug a dozen combined changes that all went in at once.
53. Keep AI-driven changes small and incremental. It’s tempting to ask the AI for a giant refactor in one prompt, but it’s more effective to do it piece by piece. “Try smaller code changes, not entire project overhauls in one prompt,” a user advises (Cursor inefficient - Feedback - Cursor - Community Forum). You’ll get more reliable outputs and fewer surprises by slicing big jobs into bite-sized prompts.
54. Spawn a new chat for new tasks (especially if the old one is long). Context window limits and accumulated conversation can confuse the AI. Starting a fresh Composer session after a while (e.g., when you move to a new feature or after a long debug session) can improve the AI’s responsiveness and accuracy (Cursor inefficient - Feedback - Cursor - Community Forum). You can refer to earlier decisions as needed, but a fresh context avoids the baggage of the previous discussion.
55. Use conversation to plan, not just code. Don’t only give coding commands – try discussing your approach with the AI. For example: “I need to implement feature X. I’m thinking of doing it these ways… What do you think?” This can engage the AI in a higher-level planning mode. Users have noted that “Plan and talk more with the Composer” helps – treating it like a partner to bounce ideas off before diving into code leads to better results (Cursor inefficient - Feedback - Cursor - Community Forum).
56. Write detailed prompts – specifics matter. The quality of the AI's output is directly tied to how well you explain what you want. One person emphasized that prompts are the most important part of using LLMs (Cursor inefficient - Feedback - Cursor - Community Forum). Instead of "Fix this bug," say "Fix the null pointer exception that occurs when doing X in Y.java." Include any constraints or what not to change. The more precise you are, the less the AI has to guess.
57. Use high-level guidance to keep the AI focused. Before coding, you can instruct Cursor with something like: “We’re going to implement feature X. We’ll do it step by step. First, let’s do [Task 1].” This sets a clear agenda. The AI will often follow that structure and not jump ahead. Being explicit about the plan in the prompt can act like a project manager for the AI, keeping it on the rails (Cursor inefficient - Feedback - Cursor - Community Forum).
58. Have a fallback model for when Claude is busy or limited. If you run out of “Fast” Claude quota or it becomes slow, switch to another model (like GPT-4 Turbo) for a while (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum). One user did this when Claude throttled – GPT-Turbo was faster in that pinch. It may not be as smart for complex tasks, but for simpler ones it can keep you moving until Claude is available again.
59. Consider adding custom models via API keys in Cursor. Cursor lets you plug in your own API keys (OpenAI, Anthropic, etc.) and pick specific model versions. Some users report good results with particular snapshots – for example, the dated Claude 3.5 snapshot (claude-3-5-sonnet-20241022) performed better for them than the default Claude 3.5 (Cursor inefficient - Feedback - Cursor - Community Forum). Experiment with model options if you have access; a different model or variant might click better with your project.
60. Don’t hesitate to use external tools alongside Cursor. If the AI is stuck on something like a package version error, you might solve it faster by quickly googling or checking official docs yourself. After that, feed the correct info to Cursor. Real users mention that AIs struggle with certain dependency issues (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum) – a 2-minute manual check can save an hour of the AI thrashing around.
61. Mix and match AI and manual coding. Cursor doesn’t have to do everything. Let it handle the boilerplate and repetitive stuff, but you can take over for tricky logic or fine-tuning. The best outcome often comes from the AI doing the heavy lifting and you doing a pass to clean up or adjust. As one person implied: AI is great for “plumbing” but you still ensure the pipes are correctly connected (I use cursor and its tab completion; while what it can do is mind blowing, in pr… | Hacker News).
62. Use Cursor’s AI to generate documentation or comments. After writing code (with or without AI), you can ask Cursor’s chat to document it. For example, “Explain what this function does” or “Generate a docstring for this method.” It knows your code context, so it can produce pretty relevant documentation. This ensures you have at least basic docs, and you can edit from there.
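The kind of output to expect from a prompt like "generate a docstring for this method" (the function shown is hypothetical, for illustration only):

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount.

    Args:
        price: Original price; expected to be non-negative.
        percent: Discount as a percentage between 0 and 100.

    Returns:
        The discounted price, clamped at zero.
    """
    return max(price * (1 - percent / 100), 0.0)
```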
63. Also use Cursor for code explanation and learning. If you come across code (perhaps written by the AI or a colleague) that you don’t understand, highlight it and ask Cursor to explain. The AI can act as a tutor, walking you through the logic. This helps you verify AI-generated code – by having it explain the code back to you, you can catch if something doesn’t make sense.
64. Leverage “local history” to recover from mistakes. Cursor keeps a local history of file changes. On Windows, for example, it stores prior versions of files in the AppData folder (Cursor inefficient - Feedback - Cursor - Community Forum). You can also press Cmd/Ctrl+P in Cursor and type “Local history” to see backups. If the AI messes up a file badly, you can retrieve an earlier version easily using this feature.
65. Utilize community-shared rules and configs. The Cursor community has open-source rule sets for various frameworks (e.g., Angular, Django, etc.). Instead of writing your .cursorrules from scratch, you can find examples on resources like the Cursor Rules Directory (Show HN: Cursor AI Rules Directory (Open Source) | Hacker News). Importing these can instantly tailor Cursor’s AI to your tech stack’s best practices as contributed by other developers.
66. When debugging, feed error messages to Cursor. If you get an error or stack trace, copy it into the chat and ask the AI what went wrong. Cursor’s AI is pretty good at reading errors and suggesting fixes. Many users essentially “pair-program” the debugging: run code -> get error -> show error to AI -> implement suggested fix, and repeat. It saves time combing through logs alone (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum).
67. Tackle one error at a time. If there are multiple errors, don’t dump the entire error list on the AI at once. Start with the first error, address it with Cursor’s help, then move to the next. Users have found that focusing the AI on one problem at a time yields quicker fixes, whereas a long list might overwhelm it and lead to partial or confusing answers (3.7 thinking is useless in last days - Bug Reports - Cursor - Community Forum) (Help. Lost newb needs advice dealing with endless Version related errors and React type and typscript errors and cursor context - Discussion - Cursor - Community Forum).
68. If project rules aren't kicking in, explicitly invoke them. Sometimes Cursor doesn't apply your .cursorrules automatically. A user discovered that ending a prompt with "following @.cursorrules" forced the AI to load the rules (Rules don't apply unless I say " follow @.cursorrules " - Bug Reports - Cursor - Community Forum) (Rules don't apply unless I say " follow @.cursorrules " - Bug Reports - Cursor - Community Forum). So if you suspect the AI is ignoring your guidelines, literally tell it to follow them in the chat – it can make a difference.
69. Keep Cursor updated (but monitor changes). New versions of Cursor often come with improvements to the AI models and features. Users observed Claude 3.7 getting better after some updates (Max Mode for Claude 3.7 - Out Now! - Featured Discussions - Cursor - Community Forum). Upgrading can fix issues (e.g., rules not working, model improvements). Just read the changelogs in case something about your workflow (like rule format) needs adjusting.
70. Edit new rule files in VSCode or externally. If you're using the .cursor/rules system introduced in Cursor 0.45+, note that the in-app editor for these rule files is currently glitchy. One detailed guide recommends editing the .mdc rule files in an external editor because the Cursor UI might not save or update them correctly (A Deep Dive into Cursor Rules (> 0.45) - Discussion - Cursor - Community Forum). You can still use them in Cursor; just edit them with a normal text editor for now.
71. Share your rule files with your team. Since .cursor/rules/*.mdc files live in your project folder, they can be checked into git. Teams collaborating on a project can all use the same AI rules by pulling from the repo (A Deep Dive into Cursor Rules (> 0.45) - Discussion - Cursor - Community Forum). This means everyone gets consistent AI assistance aligned to your project's guidelines – a huge boon for teamwork.
72. Always plan for rollback – AI changes are not infallible. Because AI changes can introduce subtle bugs, use approaches like feature branches and frequent commits so you can easily undo or isolate AI contributions. One user in frustration asked “don’t you have a backup?” (Cursor inefficient - Feedback - Cursor - Community Forum) – making the point that you should never be stuck if AI goes wrong. Proper source control and backups are your safety net.
73. The first AI solution is rarely the final one. Manage expectations: the AI might get things 95% wrong on the first try (Cursor inefficient - Feedback - Cursor - Community Forum). Don’t be discouraged; use that as information on what to fix or how to ask differently. Many experienced users note that you need to iterate with the AI and gradually zero in on the solution – it’s rarely one-and-done, especially on complex tasks.
74. Use Cursor as a pair programmer, not an oracle. Those who get the most out of Cursor treat it as a partner to collaborate with. They review its code, test it, give feedback, and guide it, much like working with a human junior dev. This interactive mindset – rather than hoping the AI will perfectly solve things on its own – leads to better productivity gains (I use cursor and its tab completion; while what it can do is mind blowing, in pr… | Hacker News) (I use cursor and its tab completion; while what it can do is mind blowing, in pr… | Hacker News).
75. Write down successful prompt patterns. If you discover a phrasing or technique that consistently works (for example, “summarize the plan then implement” or a certain style of asking for regex), keep a snippet library. Many users develop a few go-to prompts for recurring situations. Having these ready saves time and ensures you communicate effectively with the AI each time (Cursor inefficient - Feedback - Cursor - Community Forum).
76. Don’t blindly accept code – verify it. Even if Cursor provides a seemingly good solution, run the code and verify the logic. As one programmer noted, they double and triple-check AI-written code (I use cursor and its tab completion; while what it can do is mind blowing, in pr… | Hacker News). This might involve writing a quick test or adding some logging. It’s faster to catch mistakes immediately than to debug later what the AI introduced.
77. Ask the AI to explain its fix if unsure. If Cursor suggests a non-obvious solution, you can follow up with, “Can you explain why this fixes the issue?” The AI can provide reasoning. This helps you learn and verify that the fix is legitimate. It also exposes any faulty reasoning – if the explanation doesn’t make sense, you know to be skeptical of the code.
78. Keep the scope of changes narrow to maintain control. When working with AI, try to avoid multi-faceted prompts like “Refactor the code and add a new feature and also optimize performance.” Each of those is a separate effort. Community advice is to handle them one at a time, so you can test and validate each before proceeding. This modular approach prevents a cascade of new bugs (Cursor inefficient - Feedback - Cursor - Community Forum).
79. Collaborate through conversation, not just commands. Remember you can ask the AI questions too. For example, “What do you think is causing this bug?” or “Is there a better way to do X?” This leverages the AI’s analytical side. Some users effectively use Cursor’s chat to rubber-duck problems – the act of asking the AI in detail often clarifies their own understanding and leads to a solution (with or without the AI’s direct fix).
80. Use Cursor’s strengths for bulk changes. If you need to update multiple instances of a pattern (say rename a function across files, or update API usage everywhere), Cursor’s AI is great at repetitive edits. You can prompt something like “Apply this change to all occurrences in the project” – it may use its knowledge of your codebase to do so, or guide you through each. This is far faster than manual find-replace especially if each instance needs slight adjustment.
81. Curb the AI’s creativity when you need precision. By default, models like Claude may get “creative” and infer what you might want, sometimes incorrectly. If you require a very exact change, phrase your prompt to leave no wiggle room. For instance: “Only change X to Y, and do not alter anything else.” Being that direct can stop the AI from introducing unintended modifications (Observations on Claude Sonnet 3.7: - Feedback - Cursor - Community Forum).
82. Conversely, let the AI be creative for brainstorming. If you’re in an early stage (designing a solution, or not sure how to implement something), you can ask Cursor open-ended questions. E.g., “What are some approaches to implement feature Y?” or “Generate a few different solutions for this algorithm with pros/cons.” Users use Cursor not only to write code, but to brainstorm ideas that they then choose from and refine.
83. Utilize Cursor’s diff review interface. When Cursor proposes changes, you typically see a diff (code changes highlighted). Take advantage of that: review the diff carefully before accepting. This is where you catch if the AI is deleting something it shouldn’t, or adding nonsense. It’s much easier to fix by saying “Don’t remove that part” when you see it in diff, than after you’ve applied and forgot what changed.
84. Always test the “happy path” after AI changes. Run through the primary use-case of your app or function once the AI makes a change. Many users note the AI might handle the main scenario well but can introduce edge-case bugs. By testing the common path immediately, you at least ensure the basic functionality isn’t broken, which is a good sanity check before moving on.
85. Don’t let the AI “guess” for too long on environment issues. If Cursor is struggling with environment-specific errors (like build config, package version, etc.), it might try a lot of guesses. It can be more efficient for you to step in, resolve the environment issue manually (since you have actual access to logs, etc.), and then let the AI continue with feature work. Use the AI where it adds value, not on tasks where a human intervention can resolve a deadlock quickly (Cursor inefficient - Feedback - Cursor - Community Forum).
86. Make use of the “report issue” or feedback features. If you encounter really bad AI behavior (like it consistently does something wrong), use Cursor’s feedback tools to send a report (Cursor inefficient - Feedback - Cursor - Community Forum). The dev team actively improves the models with such feedback. In the meantime, you can often find workarounds via the forum or these tips, but reporting helps everyone as the product evolves.
87. If rules are critical, mention them in your prompt. As a workaround for rule application issues, explicitly referencing them (e.g., “according to our coding rules,…”) in your query can remind the AI. One user essentially had to tell the AI “follow the project rules” to get them working (Rules don’t apply unless I say “ follow @.cursorrules “ - Bug Reports - Cursor - Community Forum). Until the rules system is foolproof, reinforcing them in chat can help.
88. Use shared team rules to enforce standards. If multiple developers use Cursor on a project, agree on a set of Cursor rules for that project and share them in the repo. That way, the AI will apply the same guidelines (naming conventions, architecture patterns, etc.) for everyone, leading to consistent code generation no matter who’s driving the AI (A Deep Dive into Cursor Rules (> 0.45) - Discussion - Cursor - Community Forum).
89. If the AI output is too verbose or complex, ask for simplification. You can tell Cursor, “simplify this code” or “can you refactor this to be more concise/clear.” Often, it will streamline its own earlier output. This is useful if the first pass works but is hard to read. The AI can act as a second set of eyes to improve code clarity on request (just verify it doesn’t change functionality while simplifying).
90. Consider using MCP (Model Context Protocol) servers for advanced use cases. Cursor supports MCP servers that let the AI perform actions like editing the filesystem or running code. If you’re technically inclined, you can set up these to allow Cursor’s AI to, say, execute your code or tests as it writes them. Some folks have experimented with custom MCP servers to extend Cursor’s capabilities (For anyone trying this at home, if you want a starting point, you could use OP’s… | Hacker News). This is experimental, but it’s a frontier for power users who want the AI to not only write code but also run and validate it within Cursor.
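Setup details vary by Cursor version, so check the current docs, but MCP clients (Cursor included) are generally configured with a small JSON file – typically an mcp.json under the .cursor directory – along these lines, here registering the reference filesystem server as an illustration (the project path is a placeholder):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/project"]
    }
  }
}
```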
91. Remember that AI can hallucinate nonexistent APIs or functions. If Cursor suggests using a function or endpoint that you’ve never heard of, double-check if it actually exists. It might be pulling from training data and not your actual code. When this happens, guide it back: e.g., “That function doesn’t exist in our code, please implement it or use an alternative.” This keeps the AI grounded in reality.
92. Use Cursor’s AI for codebase Q&A. Beyond coding, you can ask questions like, “Where is the logic for user authentication?” or “Which function handles the payment processing?” The AI, having context of your codebase (if indexed), can answer or at least point you to relevant files. This is a quick way to get oriented in a new or large project.
93. Embrace trial and error – it’s part of the process. Coding with AI often involves some back-and-forth: the AI might not get it right immediately, and you might need to try a couple of different prompts or approaches. Community members advise patience; the productivity boost comes after a bit of an initial learning curve in communicating with the AI (Cursor inefficient - Feedback - Cursor - Community Forum) (Cursor inefficient - Feedback - Cursor - Community Forum). Stick with it, and you’ll start to know how to prompt it better.
94. Use AI to generate multiple options. If you’re unsure about the best approach, you can ask Cursor something like, “Give me two different implementations for this function.” This way, you can compare the solutions it offers and pick the best parts of each. It’s like getting a second (and third) opinion in a single go.
95. Keep an eye on the usage dashboard. The Cursor interface has a usage tracker for API calls/costs. Check it periodically especially when using expensive models. As one user noted, seeing the breakdown (e.g., many “tool” calls racked up cost) alerted them to change strategy (Will the 3.7 MAX bankrupt us? - Discussion - Cursor - Community Forum). If you notice a spike, you might decide to switch models or reduce how much you let the AI auto-run.
96. Update Cursor and model settings for improvements. If the AI is underperforming, make sure you’re on the latest Cursor version and model. The dev team often tweaks prompts and model versions in updates, which can significantly improve behavior. A user observed Claude’s attitude issues were toned down after a fix, for example (Max Mode for Claude 3.7 - Out Now! - Featured Discussions - Cursor - Community Forum). So, staying updated can magically solve issues that you struggled with earlier.
97. Use external editors alongside Cursor when needed. Sometimes it’s easier to do certain edits in a text editor/IDE (especially large multi-file changes or complex merge conflicts) and then come back to Cursor. Cursor won’t be offended – you can always drag your manually edited file back into chat and let the AI continue from there if needed. Think of Cursor as part of your toolkit, not the entire toolbox.
98. Edit Cursor's project rule files externally for now. As mentioned, the .mdc rule files in .cursor/rules can be edited with VS Code or any editor. Doing so allows you to leverage version control on them too. Because the integrated editor is a bit buggy, you'll ensure your rules are correctly written by using a standard editor (A Deep Dive into Cursor Rules (> 0.45) - Discussion - Cursor - Community Forum). After editing, you can reload Cursor to apply them.
99. Use Git to its fullest – commit AI changes logically. After a successful AI-assisted change, commit it with a message (you can even ask Cursor to suggest a commit message!). Treat each AI session or fix as a commit unit. This makes it easy to review later and to revert if necessary. Some users even have Cursor auto-commit each change (via external tools) (For anyone trying this at home, if you want a starting point, you could use OP’s… | Hacker News), which squashes multiple AI micro-commits into one – find a workflow that keeps your history clean and reversible.
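A bare-bones version of that workflow with plain git (the commit message and SHA are placeholders):

```sh
# Review exactly what the AI changed, stage it deliberately, and commit it as one logical unit.
git diff
git add -p
git commit -m "Add slug generation for article URLs (AI-assisted)"

# If the change turns out to be bad later, revert just that unit.
git revert <commit-sha>
```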
100. Treat Cursor’s AI as an assistant that amplifies your skills. The developers and many users stress that Cursor won’t replace understanding code – but it can speed up writing and refactoring it (I use cursor and its tab completion; while what it can do is mind blowing, in pr… | Hacker News). Use it to do the rote stuff, use it to brainstorm, use it to catch obvious bugs, but continue to apply your engineering judgment. Those who pair their knowledge with Cursor’s capabilities (and limitations) are seeing the biggest productivity boosts, effectively coding at “the speed of thought” with an AI pair programmer at their side (Cursor - The AI Code Editor) (Cursor inefficient - Feedback - Cursor - Community Forum).