All Posts

How MCP (or similar standards) Could Change the Way We Use Software with AI

I think many of my colleagues here at Dinero will agree when I say I’ve been deep down the MCP-server rabbit hole lately 🕳️😅

Read more ...


Screw iOS — Build Your AI Agent App on SMS

What will be the next big platform for the emerging interface we’re starting to see with AI agents and standards like MCP? 🤔

Read more ...


100 Tips, Tricks, Hacks, and Methods for Coding with AI in Cursor

Okay, full disclosure: This article was generated using ChatGPT Deep Research, based on instructions to review and paraphrase recent discussions from Cursor forums and Hacker News comments. All tips were synthesized from community insights to provide practical, up-to-date advice. However, I found it super useful, so I’m sharing it on my blog for others to read.

Read more ...


Investigating the DeepSeek DeepGEMM Release

DeepSeek just released DeepGEMM, the third installment in their Open-Sourcing 5 AI Repos in 5 Days series.

Read more ...


How I created a free blog using Python and GitHub Pages

This blog post will show you how to create a free blog hosted on GitHub Pages using the Python documentation generator Sphinx and the ablog extension.
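
To give a flavor of what the post covers, here is a minimal sketch of a Sphinx conf.py with the ablog extension enabled. The project name, author, base URL, and theme below are placeholders; the post itself walks through the actual setup and the GitHub Pages deployment.

```python
# conf.py: a minimal Sphinx configuration with the ablog extension enabled.
# Project name, author, base URL, and theme are placeholders for this sketch.

project = "My Blog"
author = "Kasper Junge"

# ablog plugs into Sphinx like any other extension
extensions = [
    "ablog",
]

# Base URL used by ablog for feeds and canonical links once the site is
# served from GitHub Pages (placeholder value).
blog_baseurl = "https://username.github.io"

html_theme = "alabaster"
```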

Read more ...


Hello Ablog World!

My first post on my brand new ablog blog 🚀

Read more ...


Prompting Patterns: The Clarification Pattern

The more I use ChatGPT and develop software with LLM APIs, the more I realize that context is essential for LLMs to provide high-quality answers. When I receive unsatisfactory responses from ChatGPT, it’s typically due to a lack of information about the problem I’m presenting or my current situation: I’ve been ambiguous about the task, or ChatGPT perceives the issue in a way I hadn’t anticipated. However, I’ve found that adopting a simple pattern significantly reduces these challenges and consistently leads to more accurate responses.
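
The post describes the exact pattern; as a rough illustration of the general idea, here is a minimal sketch (using the OpenAI Python SDK) where a system prompt asks the model to request clarification before answering when context is missing. The prompt wording and model name are illustrative, not taken from the post.

```python
# Rough sketch of a clarification-style prompt using the OpenAI Python SDK.
# The system prompt wording and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Before answering, check whether you have enough context about the task "
    "and the user's situation. If anything important is ambiguous or missing, "
    "ask clarifying questions first instead of guessing."
)

def ask(question: str, model: str = "gpt-4o-mini") -> str:
    """Send a question with the clarification-style system prompt and return the reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # A vague question like this should trigger clarifying questions rather than a guess.
    print(ask("My script is slow. How do I fix it?"))
```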

Read more ...


Text Classifiers are an Underrated Application of LLMs

Before LLMs really became a thing, getting up and running with a text classifier for a non-standard problem from scratch, including the annotation of a dataset for training, would probably take at least 3 weeks of work hours. That amounts to 7,200 minutes. Today, getting up and running with a classifier using LLMs requires only writing a prompt, which takes about a minute.
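
To make the comparison concrete, here is a minimal sketch of what such a prompt-based classifier can look like (OpenAI Python SDK; the label set, prompt wording, and model name are made up for the example, not taken from the post).

```python
# Minimal sketch of a prompt-based text classifier.
# The label set, prompt wording, and model name are illustrative, not from the post.
from openai import OpenAI

client = OpenAI()

LABELS = ["complaint", "question", "praise"]  # example label set

def classify(text: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to answer with exactly one label from LABELS."""
    prompt = (
        f"Classify the following text into exactly one of these labels: "
        f"{', '.join(LABELS)}. Answer with the label only.\n\nText: {text}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the output as deterministic as possible
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "unknown"  # guard against off-list answers

if __name__ == "__main__":
    print(classify("The app keeps crashing every time I open it."))
```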

Read more ...


A Process for Building LLM Classifiers

Large language models (LLMs) can be prompt-engineered to solve a wide variety of tasks. While many consider chat the primary use case, LLMs can also be used to build traditional classifiers.
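
The post lays out the full process; as a generic sketch of one step such a process typically includes, here is how a prompt-based classifier (stubbed out below) might be sanity-checked against a handful of hand-labeled examples before being trusted. The labels, examples, and function names are illustrative and not taken from the post.

```python
# Generic sketch: sanity-checking a classifier against a tiny hand-labeled set.
# classify() below is a trivial stand-in; swap in an LLM-backed version in practice.

def classify(text: str) -> str:
    """Stand-in for an LLM-backed classifier returning one of the expected labels."""
    lowered = text.lower()
    if "?" in lowered:
        return "question"
    if "love" in lowered or "great" in lowered:
        return "praise"
    return "complaint"

# Small hand-labeled evaluation set (illustrative examples).
LABELED_EXAMPLES = [
    ("The app keeps crashing every time I open it.", "complaint"),
    ("How do I export my data to CSV?", "question"),
    ("Love the new dark mode, great work!", "praise"),
]

def accuracy() -> float:
    """Fraction of labeled examples the classifier gets right."""
    correct = sum(classify(text) == label for text, label in LABELED_EXAMPLES)
    return correct / len(LABELED_EXAMPLES)

if __name__ == "__main__":
    print(f"accuracy: {accuracy():.2f}")
```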

Read more ...