Bullet is an excellent gem for catching N+1 queries. However, a significant portion of the performance issues we encounter in production fly completely under Bullet’s radar. In this post, we’ll explore the performance killers that Bullet can’t detect and how to identify and fix them.
Table of Contents
- Bullet’s Limitations
- Hidden N+1s in Custom Methods
- Serializer-Induced Query Explosions
- The Count vs Size vs Length Trap
- Silent Killers in Callbacks
- Select vs Pluck: Memory and Performance Trade-offs
- Finding Real Issues with EXPLAIN ANALYZE
- Index Strategies
- Tools and Best Practices
1. Bullet’s Limitations
Bullet (currently v8.0.8 as of 2025) detects eager loading issues through ActiveRecord association hooks. However, it remains silent in these scenarios:
- Queries inside custom methods (.where, .exists?, .find_by)
- Relationships defined in serializers without AR association hooks
- Conditional queries that don’t go through standard association loading
- Database operations inside callbacks
- View and controller tests (when views aren’t rendered)
- Background jobs (requires special configuration)
# Bullet WILL NOT CATCH THIS
class User < ApplicationRecord
has_many :teams
def supports?(team_name)
teams.where(name: team_name).exists?
end
end
# In the view - separate query for each user
<% @users.each do |user| %>
<li class="<%= user.supports?("Lakers") ? "purple" : "" %>">
<%= user.name %>
</li>
<% end %>
This code generates 101 queries for 100 users, and Bullet gives no warning.
Why This Happens
Bullet hooks into ActiveRecord’s association loading mechanism. When you call user.teams, it tracks whether the association was preloaded. But when you call teams.where(...), you’re creating a new query scope that bypasses the association tracking entirely.
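One way to keep supports? from querying per user — a minimal sketch, assuming :teams is eager loaded in the controller — is to filter the preloaded collection in Ruby and only fall back to SQL when it isn't loaded:

```ruby
class User < ApplicationRecord
  has_many :teams

  # Uses the in-memory collection when the association was preloaded,
  # so iterating 100 users issues no extra queries.
  def supports?(team_name)
    if teams.loaded?
      teams.any? { |team| team.name == team_name }
    else
      teams.exists?(name: team_name)
    end
  end
end

# Controller: two queries total (users + teams), regardless of user count
@users = User.includes(:teams)
```

The loaded? guard keeps the method correct when called on a single record outside the list view, where a targeted EXISTS query is cheaper than loading all teams.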
2. Hidden N+1s in Custom Methods
The Problem
class Order < ApplicationRecord
belongs_to :user
has_many :line_items
def total_with_discount
# Separate query for each order
user.loyalty_tier.discount_rate * line_items.sum(:price)
end
end
# Controller
def index
@orders = Order.includes(:line_items).limit(50)
end
# View - Bullet is silent, but user and loyalty_tier cause N+1
<% @orders.each do |order| %>
<td><%= order.total_with_discount %></td>
<% end %>
Solution 1: Expand Eager Loading
def index
@orders = Order
.includes(:line_items, user: :loyalty_tier)
.limit(50)
end
Solution 2: Preload and Pass Data
class Order < ApplicationRecord
def total_with_discount(discount_rate = nil)
rate = discount_rate || user.loyalty_tier.discount_rate
rate * line_items.sum(:price)
end
end
# Controller - preload rates separately, pass to the view
def index
  @orders = Order.includes(:line_items).limit(50)
  user_ids = @orders.map(&:user_id).uniq
  @discount_rates = User
    .joins(:loyalty_tier)
    .where(id: user_ids)
    .pluck(:id, "loyalty_tiers.discount_rate")
    .to_h
end
# View - look up the preloaded rate instead of triggering queries
<%= order.total_with_discount(@discount_rates[order.user_id]) %>
Solution 3: Use Prosopite for Detection
Prosopite is an alternative gem that detects N+1s by counting actual SQL queries rather than tracking associations:
# Gemfile
gem 'prosopite'
# config/environments/development.rb
config.after_initialize do
Prosopite.rails_logger = true
Prosopite.raise = true
end
Prosopite will catch the supports? method N+1 that Bullet misses.
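Prosopite also works in the test suite. A minimal RSpec setup, following the pattern in the gem's README, scans every example and fails on detected N+1s:

```ruby
# spec/rails_helper.rb
RSpec.configure do |config|
  config.before(:each) do
    Prosopite.scan   # start counting queries for this example
  end

  config.after(:each) do
    Prosopite.finish # raise (or log) if an N+1 pattern was seen
  end
end
```

Combined with Prosopite.raise = true in the test environment, this turns every existing request spec into an N+1 regression test for free.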
3. Serializer-Induced Query Explosions
Serializer gems such as ActiveModel::Serializers invoke associations and custom methods while rendering, and the queries they trigger often escape Bullet’s detection.
The Problem
class PostSerializer < ActiveModel::Serializer
attributes :id, :title, :author_name, :comments_count
def author_name
object.user.name # N+1
end
def comments_count
object.comments.count # N+1 (should use counter_cache)
end
end
# Controller
def index
posts = Post.all
render json: posts # 2 extra queries per post
end
Solution 1: Eager Loading in Controller
def index
posts = Post.includes(:user, :comments)
render json: posts
end
Solution 2: Check if Association is Loaded
class PostSerializer < ActiveModel::Serializer
attributes :id, :title, :author_name
belongs_to :user, if: -> { object.association(:user).loaded? }
def author_name
return nil unless object.association(:user).loaded?
object.user.name
end
end
Solution 3: BatchLoader for Lazy Loading
# Gemfile
gem 'batch-loader'
class Post < ApplicationRecord
def author_lazily
BatchLoader.for(user_id).batch do |user_ids, loader|
User.where(id: user_ids).each { |user| loader.call(user.id, user) }
end
end
end
BatchLoader collects all IDs during serialization and executes a single query at the end.
4. The Count vs Size vs Length Trap
These three methods return the same result but have vastly different performance characteristics.
Comparison
| Method | Behavior | Query |
|---|---|---|
| count | Always executes COUNT query | SELECT COUNT(*) FROM... |
| length | Loads the collection, counts in Ruby | SELECT * FROM... |
| size | Smart: uses length if loaded, count otherwise | Depends on context |
The Problem
# We eager loaded with includes
@users = User.includes(:posts).where(active: true)
@users.each do |user|
# WRONG: Despite includes, executes COUNT query for each user
puts "#{user.name}: #{user.posts.count} posts"
end
The Solution
@users.each do |user|
# CORRECT: Counts the loaded collection in Ruby
puts "#{user.name}: #{user.posts.size} posts"
end
When to Use Counter Cache
For frequently accessed counts, counter_cache is the best solution:
# Migration
add_column :users, :posts_count, :integer, default: 0, null: false
# Model
class Post < ApplicationRecord
belongs_to :user, counter_cache: true
end
# Backfill existing data
User.find_each do |user|
User.reset_counters(user.id, :posts)
end
Conditional Counter Cache with counter_culture
# Gemfile
gem 'counter_culture'
class Order < ApplicationRecord
belongs_to :customer
counter_culture :customer # orders_count
counter_culture :customer,
column_name: proc { |order| order.cancelled? ? 'cancelled_orders_count' : nil }
end
5. Silent Killers in Callbacks
Database operations inside callbacks can become performance killers, especially during bulk operations.
The Problem
class Article < ApplicationRecord
after_save :update_search_index
after_save :notify_subscribers
after_save :recalculate_stats
private
def update_search_index
SearchIndex.update(self) # External API call
end
def notify_subscribers
subscribers.each do |sub| # N+1 potential
NotificationMailer.new_article(sub, self).deliver_later
end
end
def recalculate_stats
author.articles.published.count # Query on every save
category.update_article_count! # Update on every save
end
end
# Disaster during bulk import
articles_data.each { |data| Article.create!(data) } # 1000 articles = 3000+ callback runs
Solution 1: Make Callbacks Conditional
class Article < ApplicationRecord
attr_accessor :skip_callbacks
after_save :update_search_index, unless: :skip_callbacks
after_save :notify_subscribers, unless: :skip_callbacks
end
# Bulk import
Article.transaction do
articles_data.each do |data|
Article.create!(data.merge(skip_callbacks: true))
end
end
# Run post-import operations in batch
Article.where(id: imported_ids).find_each do |article|
  article.send(:update_search_index) # send, since the callback method is private
end
Solution 2: Use after_commit
class Article < ApplicationRecord
# Runs after transaction commits
# Safe for Sidekiq jobs
after_commit :notify_subscribers_async, on: :create
private
def notify_subscribers_async
NotifySubscribersJob.perform_later(id)
end
end
Solution 3: Suppressor for Temporary Bypass
# Rails 5+: suppresses saving of Notification records created inside the block
Notification.suppress do
  User.create!(name: "Jane") # any Notification built by callbacks won't be persisted
end
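And when callbacks and validations are not needed for a bulk load at all, Rails 6+ insert_all bypasses model instantiation entirely — with the caveat that it also skips validations and does not set timestamps by default, so the data must be prepared up front:

```ruby
now = Time.current

# Single INSERT statement; no AR objects, no callbacks, no validations
Article.insert_all(
  articles_data.map { |data| data.merge(created_at: now, updated_at: now) }
)
```

This pairs well with Solution 1: load the raw rows with insert_all, then run the genuinely needed post-processing (search indexing, notifications) in one batch afterwards.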
6. Select vs Pluck: Memory and Performance Trade-offs
Choosing the right method is critical when working with large datasets.
Benchmark Comparison (10,000 records)
# SLOWEST: Load full objects, then map
User.all.map(&:id)
# ~270ms, high memory
# MEDIUM: Select only id, but still creates AR objects
User.select(:id).map(&:id)
# ~100ms, medium memory
# FASTEST: Returns array directly, no AR objects
User.pluck(:id)
# ~15ms, low memory
When to Use Which?
# PLUCK: When you only need values
user_ids = User.active.pluck(:id)
emails = User.where(role: :admin).pluck(:email)
# SELECT: When you need AR methods
users = User.select(:id, :name, :email)
users.each { |u| puts u.full_display_name } # AR method call
# PLUCK + SUBQUERY: Performant IN clause
Post.where(user_id: User.active.select(:id))
# Single query: SELECT * FROM posts WHERE user_id IN (SELECT id FROM users WHERE...)
# WRONG: pluck with subquery
Post.where(user_id: User.active.pluck(:id))
# Two queries + array held in memory
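Relatedly, when you need a single value from a single row, Rails 6+ pick avoids both the AR object and the intermediate array:

```ruby
# PICK: shorthand for limit(1).pluck(:email).first
newest_admin_email = User.where(role: :admin).order(created_at: :desc).pick(:email)
```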
Lazy Enumeration for Large Datasets
# Memory-friendly iteration
User.select(:id, :email).find_each(batch_size: 1000) do |user|
process(user)
end
# Even more efficient with postgresql_cursor gem
User.select(:id, :email).each_row do |row|
process(row)
end
7. Finding Real Issues with EXPLAIN ANALYZE
Bullet and similar tools catch N+1s, but the real performance problem is sometimes hidden in a single slow query.
Using EXPLAIN in Rails
# Basic explain
User.where(email: "test@example.com").explain
# Detailed analysis (actually executes the query)
User.where(email: "test@example.com").explain(:analyze)
# More detailed with activerecord-analyze gem
User.where(email: "test@example.com").analyze(
format: :json,
buffers: true,
timing: true
)
Reading EXPLAIN Output
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';
-- BAD: Sequential Scan (scans entire table)
Seq Scan on users (cost=0.00..1234.00 rows=1 width=244)
(actual time=45.123..89.456 rows=1 loops=1)
Filter: (email = 'test@example.com')
Rows Removed by Filter: 99999
-- GOOD: Index Scan
Index Scan using index_users_on_email on users
(cost=0.42..8.44 rows=1 width=244)
(actual time=0.026..0.027 rows=1 loops=1)
Index Cond: (email = 'test@example.com')
Red Flags to Watch For
| Situation | Meaning | Solution |
|---|---|---|
| Seq Scan on large table | Index not being used | Add index |
| High Rows Removed by Filter | Too much data being filtered | Optimize WHERE clause |
| Sort with high cost | In-memory sorting | Add index matching the ORDER BY |
| Hash Join with high cost | Join not optimized | Add index on FK |
| Nested Loop with many loops | N+1-like situation | Convert to batch query |
PgHero and rails-pg-extras
# Gemfile
gem 'pghero'
gem 'rails-pg-extras'
# Find slow queries
PgHero.slow_queries
# Unused indexes
RailsPgExtras.unused_indexes
# Index hit rate (below 95% indicates problems)
RailsPgExtras.index_usage
8. Index Strategies
Compound Index Column Order
# Migration
add_index :orders, [:user_id, :status, :created_at]
# WORKS (left-to-right matching)
Order.where(user_id: 1)
Order.where(user_id: 1, status: 'pending')
Order.where(user_id: 1, status: 'pending').order(created_at: :desc)
# DOESN'T WORK (user_id skipped)
Order.where(status: 'pending')
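If the status-only query is common, give it its own index rather than relying on the compound one — a sketch, reusing the orders table from above:

```ruby
# Migration: dedicated index for status-only lookups
add_index :orders, :status

# Now this uses an index as well
Order.where(status: "pending")
```

Verify the plan with Order.where(status: "pending").explain before and after; the Seq Scan should become an Index Scan (or Bitmap Index Scan) once the index exists.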
Partial Index
# Index only active records
add_index :users, :email,
where: "deleted_at IS NULL",
name: "index_active_users_on_email"
# Index only specific statuses
add_index :orders, :created_at,
where: "status = 'pending'",
name: "index_pending_orders_on_created_at"
Covering Index (Index-Only Scan)
# If all columns in SELECT are in the index,
# PostgreSQL doesn't need to access the table
add_index :users, [:email, :name, :role]
# This query is answered entirely from the index
User.where(email: "x@y.com").select(:name, :role)
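PostgreSQL 11+ can also build covering indexes with INCLUDE, which stores payload columns in the index leaf pages without making them part of the search key, keeping the key small. Rails 7.1 exposes this through an :include option on add_index (verify against your Rails version):

```ruby
# Rails 7.1+ with PostgreSQL 11+
add_index :users, :email, include: [:name, :role]

# Equivalent raw SQL for older Rails versions
execute <<-SQL
  CREATE INDEX index_users_on_email_covering
  ON users (email) INCLUDE (name, role)
SQL
```

Compared with indexing [:email, :name, :role] directly, the INCLUDE form keeps name and role out of the B-tree key, so inserts and key comparisons stay cheaper while the query above can still be answered index-only.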
Expression Index
# Index for LOWER()
execute "CREATE INDEX index_users_on_lower_email ON users (LOWER(email))"
# Query must use the same expression
User.where("LOWER(email) = ?", email.downcase)
GIN Index (Full-text and JSONB)
# For JSONB columns
add_index :products, :metadata, using: :gin
# For array columns
add_index :articles, :tags, using: :gin
# For full-text search
execute <<-SQL
CREATE INDEX index_articles_on_search ON articles
USING gin(to_tsvector('english', title || ' ' || content))
SQL
9. Tools and Best Practices
Development Tools
# Gemfile (development group)
group :development do
gem 'bullet' # N+1 detection (associations)
gem 'prosopite' # N+1 detection (query counting)
gem 'rack-mini-profiler' # Request profiling
gem 'memory_profiler' # Memory allocation
gem 'activerecord-analyze' # EXPLAIN ANALYZE
gem 'rails-pg-extras' # PostgreSQL insights
end
N+1 Detection in Tests
# Using n_plus_one_control gem
# spec/rails_helper.rb
require 'n_plus_one_control/rspec'
RSpec.describe UsersController, type: :request do
context 'N+1 detection' do
populate { |n| create_list(:user, n, :with_posts) }
it 'does not have N+1 queries' do
expect { get '/users' }.to perform_constant_number_of_queries
end
end
end
Production Monitoring
# Gemfile
gem 'skylight' # Performance monitoring
gem 'scout_apm' # Alternative
gem 'newrelic_rpm' # Alternative
# Custom slow query logging
# config/initializers/slow_query_logger.rb
ActiveSupport::Notifications.subscribe('sql.active_record') do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  if event.duration > 100 # Over 100ms
    Rails.logger.warn "[SLOW QUERY] #{event.duration.round(2)}ms: #{event.payload[:sql]}"
  end
end
CI Pipeline Query Control
# .github/workflows/test.yml
- name: Run tests with Bullet
env:
BULLET_ENABLED: true
run: bundle exec rspec
# spec/rails_helper.rb
if ENV['BULLET_ENABLED']
Bullet.enable = true
Bullet.raise = true # Fail CI on N+1
end
Conclusion
Bullet is a fantastic tool, but it only catches the tip of the iceberg. For real performance optimization:
- Watch for queries in custom methods - Bullet can’t see these
- Audit your serializers - Every attribute could be a query
- Know the count/size/length difference - Wrong usage causes N+1
- Minimize callbacks - They’re disasters in bulk operations
- Use EXPLAIN ANALYZE - Find the real bottleneck
- Apply proper index strategies - Partial, compound, covering indexes
- Monitor production - Skylight, NewRelic, custom logging
- Consider Prosopite - Catches what Bullet misses
The key insight: Bullet tracks association loading, not query execution. Any code path that generates queries without going through standard association loading will be invisible to Bullet.
