Field Notes

Building Offline Sync for a Lightweight POS with Expo, TinyDB, and Supabase

Sep '25

I’ve always been fascinated by local-first software — apps that don’t break when the internet does.

After years working around logistics and FMCG operations in emerging markets, I noticed a repeating pain point: the “last 10 meters” of tech adoption often fails because connectivity is unreliable. Corner shops, dispatch operators, and field agents all need tools that just work, even when offline.

So I built a small POS (Point-of-Sale) app — not as a startup idea, but as a learning project to sharpen my skills in building local-first mobile systems. It’s a lightweight Expo app powered by TinyDB, React Query, and Supabase, designed for small merchants to track daily sales and sync data once they’re back online.

Why I didn’t start with SQLite

SQLite is great — it’s stable, structured, and the backbone of many serious offline-first apps. But in early prototypes, it felt too heavy. The goal was to move fast, understand the local-first architecture, and ship something I could test in the real world.

I used TinyDB instead — a small async key-value wrapper that mimics the simplicity of AsyncStorage, but allows batching and quick JSON persistence.

import { TinyDB } from 'tinydb-expo';

const db = new TinyDB('pos');

// Persist the whole sales array as one JSON blob...
await db.setItem('sales', JSON.stringify(salesData));
// ...and read it back (getItem resolves to null if the key doesn't exist yet).
const saved = JSON.parse((await db.getItem('sales')) ?? '[]');

It’s dead simple — no schemas, no setup, no native linking headaches. Perfect for early learning and iteration. Later, once I hit performance limits (JSON size, no indexes, etc.), it became obvious I’d migrate to SQLite for more structured storage. But by then, I had already validated most of the sync logic, which was the real goal.

The offline-first loop

The POS app runs with a three-layered data model:

  • UI state — handled by Zustand for instant reactivity
  • Persistent local store — TinyDB, keeping sales, inventory, and users cached
  • Remote sync layer — Supabase, managing online persistence once connected
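
For the UI layer, a Zustand store keeps the sale currently being rung up reactive on screen. The shape below is a sketch of my own, not the app's actual store:

import { create } from 'zustand';

// Minimal UI-state store: the cart currently being built.
// Field names here are illustrative, not the app's real schema.
export const usePosStore = create((set) => ({
  cart: [],
  addItem: (item) => set((state) => ({ cart: [...state.cart, item] })),
  clearCart: () => set({ cart: [] }),
}));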

The idea was: “The user should never wait for a network request to complete before moving on.”

When a sale happens, it's added instantly to local TinyDB, a background job queues it for sync, and once the network is back, the app automatically flushes the queue.
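
The enqueue side is just two local writes; a sketch (the key names match the flush code below):

// Record a sale locally and queue it for later sync; no network involved.
const recordSale = async (sale) => {
  const sales = JSON.parse((await db.getItem('sales')) ?? '[]');
  const pending = JSON.parse((await db.getItem('pending_sales')) ?? '[]');
  await db.setItem('sales', JSON.stringify([...sales, sale]));
  await db.setItem('pending_sales', JSON.stringify([...pending, sale]));
};

The flush side runs whenever connectivity comes back: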

import { useEffect } from 'react';
import NetInfo from '@react-native-community/netinfo';
import { useMutation } from '@tanstack/react-query';
import { TinyDB } from 'tinydb-expo';
import { supabase } from './supabase'; // the app's configured Supabase client (path is illustrative)

const db = new TinyDB('pos');

const syncSale = async (sale) => {
  await supabase.from('sales').insert(sale);
};

export const useOfflineSync = () => {
  const mutation = useMutation({ mutationFn: syncSale });

  const flushQueue = async () => {
    // getItem resolves to null when nothing has been queued yet
    const pending = JSON.parse((await db.getItem('pending_sales')) ?? '[]');
    for (const sale of pending) {
      await mutation.mutateAsync(sale);
    }
    // clear the queue only after every insert has gone through
    await db.setItem('pending_sales', JSON.stringify([]));
  };

  useEffect(() => {
    // subscribe once per mount (and clean up) instead of adding a new listener on every render
    const unsubscribe = NetInfo.addEventListener((state) => {
      if (state.isConnected) flushQueue();
    });
    return unsubscribe;
  }, []);

  return { flushQueue };
};

This worked surprisingly well. Even after multiple crashes or reloads, pending sales were safely queued and synced as soon as connectivity returned.

Handling conflicts & reliability

With offline data, conflict resolution becomes the elephant in the room. For a small POS, I went pragmatic: last write wins. Each sale carried a unique ID generated with nanoid(), and if Supabase rejected a duplicate, I simply skipped it.
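
One way to make that skip explicit (a sketch; the column names and this upsert-based variant of syncSale are mine, not the app's exact code) is to mint the ID on the device and let Supabase ignore rows it has already seen:

import { nanoid } from 'nanoid/non-secure';

// IDs are minted on the device at sale time, so a retried sync re-sends the same ID.
const buildSale = (items, total) => ({
  id: nanoid(),
  items,
  total,
  created_at: new Date().toISOString(),
});

// With a unique constraint on "id", ignoreDuplicates turns a re-sent sale into a no-op.
const syncSale = async (sale) => {
  const { error } = await supabase
    .from('sales')
    .upsert(sale, { onConflict: 'id', ignoreDuplicates: true });
  if (error) throw error;
};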

For real-world products, I’d move to:

  • Server-timestamped records to resolve order conflicts
  • Versioned objects to handle updates gracefully
  • A background worker to retry failed syncs with exponential backoff

But for my “learning by doing” phase, simple beat perfect.
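
For reference, the retry-with-backoff piece from that list is the easiest to sketch (the attempt count and delays below are arbitrary):

// Retry an async operation with exponential backoff: 1s, 2s, 4s, 8s, ...
const withBackoff = async (fn, attempts = 5, baseDelayMs = 1000) => {
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === attempts - 1) throw err; // out of retries, surface the error
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
};

// e.g. await withBackoff(() => syncSale(sale));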

Why React Query ties it all together

React Query gave me exactly what I needed: a declarative, cache-first way to handle server sync without reinventing the wheel.

const { data: sales, refetch } = useQuery({
  queryKey: ['sales'],
  queryFn: async () => {
    const { data } = await supabase.from('sales').select('*');
    // keep a local copy so the app can hydrate from TinyDB at the next launch
    await db.setItem('sales', JSON.stringify(data));
    return data;
  },
  staleTime: 1000 * 60 * 10,
});

This meant I could refetch manually, hydrate from TinyDB at launch, and still leverage all the caching and invalidation power of React Query — even without a network.
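
The "hydrate from TinyDB at launch" step isn't visible above. One way to do it (a sketch, reusing the same db instance) is to seed the React Query cache before the first render:

import { QueryClient } from '@tanstack/react-query';

export const queryClient = new QueryClient();

// Run once at startup, before the screens that read ['sales'] mount.
export const hydrateSalesCache = async () => {
  const cached = await db.getItem('sales');
  if (cached) {
    // The UI renders from this immediately; the queryFn refreshes it when online.
    queryClient.setQueryData(['sales'], JSON.parse(cached));
  }
};

The same queryClient instance has to be the one passed to QueryClientProvider, otherwise the seeded data lands in a cache the app never reads.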

Lessons learned

  1. TinyDB is great for rapid prototypes — it keeps you focused on UX and logic instead of schema design.
  2. SQLite is inevitable — once your data grows or relationships appear, you’ll want its structure and indexing.
  3. React Query plus local storage can form a powerful hybrid cache system.
  4. Network assumptions kill adoption — local-first UX isn’t optional in low-connectivity markets.
  5. You don’t need a big backend — Supabase plus good sync logic covers most small-team needs.

What’s next

The next version of this POS will migrate to SQLite (via expo-sqlite or drizzle-orm) to handle more structured data. I also plan to introduce background tasks for automatic queue flushing and analytics tracking.
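
To give a sense of where that lands (a sketch assuming expo-sqlite's async API; the schema is illustrative), the JSON blob becomes real rows with an index on sync status:

import * as SQLite from 'expo-sqlite';

// Open (or create) the database and define a real schema with an index,
// which is exactly what the JSON-blob approach lacked.
export const openPosDb = async () => {
  const db = await SQLite.openDatabaseAsync('pos.db');
  await db.execAsync(`
    CREATE TABLE IF NOT EXISTS sales (
      id TEXT PRIMARY KEY,
      total REAL NOT NULL,
      created_at TEXT NOT NULL,
      synced INTEGER NOT NULL DEFAULT 0
    );
    CREATE INDEX IF NOT EXISTS idx_sales_synced ON sales (synced);
  `);
  return db;
};

// The pending queue becomes an indexed query instead of a parsed JSON array:
// const pending = await db.getAllAsync('SELECT * FROM sales WHERE synced = 0');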

This experiment wasn’t just about building a POS — it was about proving that local-first can be lightweight, modern, and developer-friendly with today’s Expo ecosystem. Once you get the sync model right, you start to see every app as a potential local-first system waiting to happen.