
Building a Self‑Updating AI Content DSL in Rails

by Daniel Steele
10 July, 2025

Why We Needed a Self‑Updating Content System

At eola, our platform deals with dynamic data – things like bookings, activities, and user profiles that constantly change. We wanted our user-facing text (descriptions, highlights, taglines, etc.) to always stay in sync with these changes. Manually updating text every time data changes wasn’t scalable (or fun!). We envisioned a system where content could update itself whenever relevant data in our models changed – almost like teaching our Rails app to write its own content.

This idea posed a unique challenge. Rails provides callbacks (like after_save or after_commit) that can run code when a model changes. But using raw callbacks across dozens of models would become unmanageable and error-prone. We needed something more declarative and flexible – ideally a way to simply declare which content should update when certain data changes, without writing a bunch of repetitive callback code.

Existing Approaches (and Why They Fell Short)

We researched what was already out there, but no existing gem or Rails feature could do exactly what we needed:

  • Rails Observers (now deprecated): Rails used to have an Observer pattern for monitoring model events, but it still required writing Ruby classes for each model and wasn’t as granular or easy to configure for our needs. We didn’t want a forest of observers listening everywhere with complex logic.
  • ActiveRecord Callbacks: Built-in callbacks like after_update_commit could be used on each model to trigger content updates. However, wiring up callbacks for every model and every association change quickly becomes messy. There was no straightforward way to say “if anything in this association changes, update that model’s content” without writing custom code for each case.
  • Reflection & Metaprogramming: Rails ActiveRecord Reflection can introspect associations at runtime (via methods like reflect_on_all_associations). This is typically used for meta-programming tasks (like building forms or serialisers). We realised we could leverage this to generalise our solution – but doing so in a robust way (e.g., figuring out parent-child relationships on the fly) isn’t something we found in any tutorials or gems. It was uncharted territory, so we knew an internal solution would require some R&D.
  • Off-the-shelf AI or CMS: What about just using an AI service or a CMS rule engine? A generic AI model wouldn’t understand our domain relationships out of the box, and training it extensively was not feasible. And no content management system operates at the database event level to update text the moment a booking or schedule changes. We needed a tight integration with our Rails app’s logic.

In short, we had to build this ourselves. We set out to create a domain-specific language (DSL) within our Rails app that could declaratively connect model changes to content updates. This DSL would act as a high-level configuration layer on top of Rails, letting us describe what to update when, and handle the how under the hood.

Introducing the AITextable DSL

We created an internal module called AITextable – a mixin (Rails Concern) that, when included in a model, lets that model declare self-updating AI-generated text fields. Here’s a simplified example of how it looks in a model:

class Outlet < ApplicationRecord
  include AITextable

  ai_text_is_maintained_for(:description_ai_text, :meta_description_ai_text)
    .and_updates_on_changed(:name, :email, relations: [:address, :categories])
    .debounce_interval(1.hour)
    .skip_if { |outlet| outlet.draft? }
end

Breaking this down:

  • ai_text_is_maintained_for ...: We list the text fields that should be automatically maintained by AI. In our convention, these are actual model attributes (columns) which end with _ai_text. For example, description_ai_text might be a field holding an AI-generated description for the Outlet. This call also ensures these fields are set up for translations (we allow them to be stored in multiple languages).
  • .and_updates_on_changed ...: Here we declare the triggers that should cause the AI text to regenerate. This can include simple attributes (like name or email) and also associations. In the example, changes to the Outlet’s own name or email will trigger an update, and changes to related models (address or any categories associated with the Outlet) will also trigger an update. Under the hood, the DSL uses this to install the necessary hooks on those related models (more on that soon).
  • .debounce_interval 1.hour: This sets a delay so that if changes happen in quick succession, we only regenerate the AI text once things settle. In production we might use an interval like one hour, while in development we use a longer one to avoid spamming the AI during testing. Debouncing helps batch rapid changes into a single update event.
  • .skip_if { ... }: This is a safeguard – a condition under which we skip updating the AI text. In our case, for example, we might skip if the Outlet is marked as a draft or if AI content generation is disabled for it. It’s essentially a way to opt-out certain records from auto-updates (or to prevent updates during certain states).

With these few lines, the model is now “AI-textable”: whenever the specified attributes or relations change, the system will regenerate the AI text fields for that model automatically.
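For the curious, the chainable style above can be built with a small configuration object. Here's a framework-free sketch (the class and method internals are illustrative, not our exact implementation):

```ruby
# Framework-free sketch of the chainable DSL. The class method returns
# a config object whose setters return `self`, which is what enables
# the fluent `.and_updates_on_changed(...).debounce_interval(...)` style.
module AITextable
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    attr_reader :ai_text_config

    def ai_text_is_maintained_for(*fields)
      @ai_text_config = AITextConfig.new(fields)
    end
  end

  # Hypothetical configuration holder (the real concern also installs
  # callbacks; here we only capture the declaration).
  class AITextConfig
    attr_reader :fields, :triggers, :relations, :debounce, :skip_condition

    def initialize(fields)
      @fields = fields
      @triggers = []
      @relations = []
    end

    def and_updates_on_changed(*attrs, relations: [])
      @triggers.concat(attrs)
      @relations.concat(relations)
      self
    end

    def debounce_interval(interval)
      @debounce = interval
      self
    end

    def skip_if(&block)
      @skip_condition = block
      self
    end
  end
end
```

Each chained method mutates the stored config and returns self, so the whole declaration reads as one sentence while building a single configuration object behind the scenes.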

Under the Hood: How It Works

So what magic is AITextable doing behind the scenes to make this happen? We had to tackle several challenges in the implementation:

1. Tracking Changes with Callbacks (the easy part)

When a model includes AITextable, it gets a standard after_commit callback on updates:

included do
  after_commit :enqueue_ai_text_update, on: :update
end

This means after an Outlet is updated in the database, enqueue_ai_text_update is called. That method checks if any of the tracked fields or associations actually changed (to avoid unnecessary work) using Rails’ dirty change tracking. If a relevant change is detected, it will enqueue a background job to generate the new text:

def enqueue_ai_text_update(skip_check: false)
  return if ai_text_skip?                            # the skip_if condition
  return unless skip_check || ai_text_should_update? # did a tracked field change?

  UpdateAITextJob.set(wait: debounce_interval).perform_later(self.class.name, id)
end

We always perform the actual text generation asynchronously (via UpdateAITextJob) so that we don’t slow down the web request. By scheduling it with a delay (wait: debounce_interval), we ensure we batch rapid changes – e.g., if a user updates 3 fields one after another, we prefer to regenerate content once, a short time later.

(Side note: The job ultimately calls our AI text generator service, which uses GPT under the hood to produce the content. The focus here, though, is the Rails integration around it.)
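For illustration, here's a stripped-down sketch of such a job in plain Ruby – the real version is an ActiveJob, looks the record up via ActiveRecord, and calls our GPT-backed service (the GENERATOR lambda below is a stand-in for it):

```ruby
# Stripped-down sketch of the update job: look the record back up by
# class name and id, bail out if it has gone away, then write the
# generated text. GENERATOR stands in for the GPT-backed service.
class UpdateAITextJob
  GENERATOR = ->(record) { "Generated description for #{record.name}" }

  def self.perform(class_name, id)
    klass = Object.const_get(class_name)
    record = klass.find(id) # the real job uses find_by so a deleted record is a no-op
    return if record.nil?

    record.description_ai_text = GENERATOR.call(record)
  end
end
```

Passing the class name and id (rather than the record itself) keeps the job arguments serialisable and means the job always operates on the freshest copy of the record when it finally runs.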

2. Hooking into Associated Models Automatically

The really interesting part is how we handle association changes. For example, if an Activity belongs to an Outlet, and the Outlet’s AI text should update whenever any associated Activity changes, how do we make that happen without writing custom callbacks on Activity?

We achieved this by using Rails’ reflection API to dynamically attach callbacks to associated models:

  • First, when and_updates_on_changed is called with a list of relations, we iterate over each relation name and fetch its ActiveRecord reflection via reflect_on_association. This gives us metadata about the association (type, class, inverse, etc.).
  • We then get the associated class (say Activity class for relation :activities on Outlet) and use class.after_commit to define a callback on that class. In the callback’s block, we resolve the “parent” Outlet(s) that the changed child belongs to, and enqueue update jobs for them.

For example, in pseudo-code:

assoc_class.after_commit on: :update do
  parent_outlets = resolve_parents(self, Outlet, reflection)
  parent_outlets.each { |outlet| outlet.enqueue_ai_text_update(skip_check: true) }
end

The resolve_parents method contains logic to find the correct parent records depending on the association type:

  • For a belongs_to association on the parent (e.g., Outlet belongs_to :address), the parent record holds the foreign key – so we find the parent(s) by querying “every Outlet where address_id == self.id”, where self is the changed Address.
  • For a has_many or has_one association, we look for the inverse relation. Rails’ reflections tell us the name of the inverse if it’s set. If not, we fall back to conventions (e.g., an Outlet likely has many activities, and Activity belongs to Outlet, so the inverse of Outlet’s :activities could be :outlet). Using that, we call something like record.outlet to get the parent.
  • We decided to exclude has_many :through associations for now, because tracing through join models generically was complex (and rarely needed for our use case). The DSL will raise an error if you try to use a through-association as a trigger.
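The branching above can be sketched in plain Ruby. The Reflection struct below stands in for ActiveRecord's reflection object (in the real code we read macro, foreign_key and inverse_of from it, and the belongs_to branch is a where query):

```ruby
# Stand-in for ActiveRecord's reflection metadata.
Reflection = Struct.new(:macro, :foreign_key, :inverse_name, keyword_init: true)

# Given a changed child record, return the parent record(s) whose AI
# text should be refreshed, based on the association type.
def resolve_parents(child, parent_class, reflection)
  case reflection.macro
  when :belongs_to
    # The parent holds the foreign key (e.g. Outlet belongs_to :address):
    # find every parent pointing at the changed child. In Rails this is
    # `parent_class.where(reflection.foreign_key => child.id)`.
    parent_class.all.select { |p| p.public_send(reflection.foreign_key) == child.id }
  when :has_many, :has_one
    # The parent owns the association (e.g. Outlet has_many :activities):
    # walk back along the inverse reader on the child (activity.outlet).
    parent = child.public_send(reflection.inverse_name)
    parent.nil? ? [] : [parent]
  else
    # has_many :through is deliberately unsupported as a trigger.
    raise ArgumentError, "unsupported trigger association: #{reflection.macro}"
  end
end
```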

This dynamic hooking means we don’t have to hard-code anything in the child models. When Outlet declares relations: [:categories, :address], the AITextable concern goes and attaches an after_commit on the Category model and on the Address model automatically. If those records change, they look up their parent Outlet(s) and enqueue the update job. It’s a form of reactive programming in Rails – our data models “listen” to each other with minimal manual wiring.

(Under the hood, we also mark each hook with a flag to avoid installing the same callback twice. For example, if two different models set up hooks on Category, we only want to add the after_commit hook on Category once. We use a simple instance_variable_set on the class as a flag to track this.)
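That once-only guard is tiny; here's a sketch (flag and method names are illustrative, and the real version installs an after_commit rather than collecting hooks in an array):

```ruby
# Install a hook on a class at most once; a class-level instance
# variable serves as the "already installed" flag.
def install_ai_text_hook(klass, &hook)
  return if klass.instance_variable_get(:@_ai_text_hook_installed)

  klass.instance_variable_set(:@_ai_text_hook_installed, true)
  hooks = klass.instance_variable_get(:@_ai_text_hooks) || []
  klass.instance_variable_set(:@_ai_text_hooks, hooks << hook)
end
```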

3. Keeping Translated Content in Sync

Our platform supports multiple languages, and translatable fields are stored in a separate Translation model. This added another wrinkle: if someone updates, say, the French translation of an Outlet’s name, that should also trigger the AI-generated text to update (since the content might include the name).

To handle this, AITextable also injects hooks into the global Translation model:

  • Own translations: For each model, we hook into translations of its own trigger fields. Using after_commit on Translation, we check if the translation’s model_type matches our model (e.g., "Outlet") and if the translated field is one of the triggers we care about (say "name" or "description"). If yes, we enqueue an update for that record. This way, editing a field via the translation table has the same effect as editing it directly.
  • Related model translations: Similarly, if a related model’s translated field changes, we want to update the parent. For example, if an Activity has a translated title and Outlet is listening to Activity changes, updating an Activity’s title in Spanish should update the Outlet’s AI text. Our hook on Translation checks if the model_type of the translation matches one of the associated classes we’re watching, and whether the field is in that parent’s trigger list. If so, it finds all the parent records (using the same resolve_parents logic) and enqueues updates for them.

These translation hooks were a bit tricky to get right, especially to avoid false positives. We made sure to ignore changes to the AI text fields themselves (to avoid infinite loops, since the AI text fields are also stored as translations). We also handled Single Table Inheritance (if any) by checking subclass relationships when matching model types. The end result is that whether a field is changed directly or via our translation system, the appropriate content updates still fire.
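At its core, the decision each Translation hook makes boils down to a small predicate. A simplified sketch (names are illustrative; the real hook also handles STI subclass matching and the related-model parent lookup):

```ruby
# Decide whether a changed Translation row should trigger an AI text
# update for a given watching model.
# - model_type / field: metadata from the Translation row
# - watched_type: the class name the DSL declaration lives on
# - trigger_fields: the fields listed in and_updates_on_changed
# - ai_fields: the maintained *_ai_text fields, excluded outright –
#   the AI output is itself stored as translations, and reacting to
#   it would loop forever.
def translation_triggers_update?(model_type:, field:, watched_type:, trigger_fields:, ai_fields:)
  return false if ai_fields.include?(field)
  model_type == watched_type && trigger_fields.include?(field)
end
```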

4. Preventing Infinite Loops and Redundant Updates

Whenever you have callbacks that trigger other callbacks, you must be careful to avoid infinite loops. In our case, the AI text update is itself an update to the model (we’re writing new content back to a field on the record), which could recursively trigger another update.

To solve this, we employed a few strategies:

  • No self-triggering: We never configure an AI text field to trigger itself. The DSL validation prevents you from listing the AI-maintained fields as trigger fields. For example, if :description_ai_text is maintained by AI, you cannot also put :description_ai_text in the and_updates_on_changed list. This avoids the obvious loop of “update AI text -> triggers update -> update AI text -> ...”.
  • Skip flag for content updates: When our background job eventually generates new text, we update the model’s AI text fields without firing the normal callbacks. One way to do this is using lower-level methods (like update_columns in Rails, which bypasses callbacks) when we save the AI content. We also pass a flag (skip_check: true) in cases where we programmatically enqueue an update, to tell the enqueue_ai_text_update method not to double-check changes. Essentially, if we already know we need to update, we skip the ai_text_should_update? check to avoid any chance of missing it.
  • Debounce & dedup within transactions: As mentioned, we use a debounce delay to group rapid changes. Furthermore, we implemented a simple per-request deduplication mechanism. If multiple triggers fire during one web request or transaction (quite possible if you save a bunch of associated records at once), we mark the record as "already enqueued" so we only queue one job. This is done with a thread-local cache (using Rails’ CurrentAttributes) that keeps track of which records have had jobs enqueued in the current request lifecycle. The first trigger wins; subsequent triggers see the flag and do nothing. This prevents flooding the job queue with duplicate work.
  • Batching frequent events: In testing, we simulated scenarios like a burst of 10 bookings being created in a short time. We verified that thanks to the debounce, those would result in only one AI text regeneration. Tuning the debounce interval was important – too short and we’d still do extra work, too long and the content would lag behind. In production we chose a moderate interval (e.g. an hour for certain heavy updates) to balance freshness vs. efficiency.
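The per-request dedup can be sketched with a request-scoped set. In the real app the set lives on an ActiveSupport::CurrentAttributes subclass so it resets automatically between requests; here an explicit reset! method plays that role:

```ruby
require "set"

# Request-scoped dedup: the first trigger for a record wins, later
# triggers in the same request are no-ops.
class AITextDedup
  @enqueued = Set.new

  class << self
    # Called at the start of each request (CurrentAttributes does this
    # automatically in the real implementation).
    def reset!
      @enqueued.clear
    end

    # true only the first time a (class, id) pair is seen this request;
    # callers skip enqueuing the job when it returns false.
    def first_trigger?(class_name, id)
      !@enqueued.add?([class_name, id]).nil?
    end
  end
end
```

Set#add? returns nil when the element is already present, which gives us the "first trigger wins" semantics in a single call.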

5. Putting It All Together

Once the DSL was in place, adding a new self-updating text rule became much simpler. For example, we later added a feature to auto-generate taglines like “Most popular in October!” for certain pages. With AITextable, this was as easy as adding the new _ai_text field and updating the DSL declaration in the model to include the relevant trigger (e.g., maybe it updates when monthly booking counts change). No need to write custom service objects or callback classes – the existing DSL hooks take care of watching the right things and firing the update job.

We also built a comprehensive test suite to ensure these updates happen as expected. For instance, we have tests that verify:

  • Changing a tracked field enqueues an UpdateAITextJob.
  • Changing an unrelated field does not enqueue a job.
  • Updating a child model (like an Activity) triggers a job for the parent Outlet.
  • Changing a translation of a tracked field triggers a job.
  • Changing a translation of an unrelated field does not trigger anything.
  • If two fields change at once, we still only enqueue one job.

This gave us confidence that our DSL was doing the right thing and not causing any surprises.

Results and Reflections

After deploying this system, we observed several benefits:

  • Content Freshness: Dozens of pieces of content across our site became self-maintaining. For example, an outlet’s page can always say “updated X days ago” or show live availability info, without anyone manually editing it. The content stays accurate to the underlying data in near real-time. This improves the user experience and trust, since the platform’s information is always up to date.
  • Developer Efficiency: Implementing dynamic content logic used to take days of work and careful testing of callbacks. Now it’s often just a few lines in the model. Over a few months, we added around 5 new AI-driven content features via the DSL – each in a matter of hours instead of days. It’s a big productivity win to declare instead of implement from scratch each time.
  • Performance: We were careful to ensure this flexibility didn’t come at a huge performance cost. Thanks to caching, debouncing, and asynchronous processing, the overhead on the web requests is minimal. The heavy lifting (AI generation) happens offline. We measured the impact on the app server and found it negligible – no noticeable change in response times, and only a small bump in background job activity, well within our capacity. Essentially, we’ve added a lot of “smarts” to the system without slowing it down.
  • Maintainability: The DSL approach also aids maintainability. The rules for dynamic content are all in one place (in the model file, in a human-readable form). If someone is wondering “why does this text update when I change X?”, they can inspect the model’s ai_textable declaration and see the triggers listed. It’s much easier than chasing through callback method definitions scattered around.

Conclusion

Building a self-updating AI content system in Rails was an exciting challenge. We ended up pushing Rails beyond its usual use-case – essentially creating a mini reactive rules engine inside a typical web app. By leveraging Rails’ reflection and callback facilities in a creative way, we achieved a kind of data-driven content automation that wasn’t available out of the box.

For Rails developers, this approach shows how powerful the framework’s meta-programming capabilities can be. We defined a DSL that feels almost like a natural extension of Rails (similar in spirit to how you declare associations or validations), but under the hood it sets up a sophisticated network of listeners and background jobs to keep content fresh. And by tying in an AI text generator, we ensure the actual writing of the content is handled by machine intelligence, guided by our domain rules.

This project taught us a lot about balancing declarative design with practical concerns like performance and edge-case handling. It was a deep dive into Rails internals (callbacks, class methods, thread-local storage, etc.), and the result is a system that elevates our platform’s capabilities. Now our application isn’t just responding to user inputs, it’s proactively keeping its own content up-to-date – a step towards a smarter, more autonomous system.

We hope this deep dive was interesting! If you’re thinking of building something similar or have questions about the approach, feel free to reach out. Building internal DSLs and leveraging frameworks in novel ways can be challenging, but it’s incredibly rewarding when it pays off in cleaner code and better user experiences.