I'm migrating a project from PostgreSQL to SQLite. There's nothing wrong with PostgreSQL; it's the best RDBMS, period. I'm just building a product where the benefits of portability and performance seem to outweigh the quirks of SQLite. At least, that's what I think.
Btw, I learnt a couple of days back that SQLite is not pronounced 'SQL Light' but 'S-Q-L-ite', although the developer Richard Hipp doesn't seem to mind how it's pronounced. I've used SQLite plenty of times during my mobile app development days and never bothered to check the pronunciation.
My first major concern was concurrency. SQLite was not really designed for a multi-user environment with thousands or even millions of concurrent users, as in a web application.
Although the product I'm building (an on-premise Gumroad alternative) won't have a massive number of concurrent users, since it will mostly be used by my customers to sell their digital files and subscriptions, it will surely need to handle concurrent users.
My application logic is in Go, and like any Go developer I've been spoiled by the zero-effort concurrency. The Go SQLite driver (go-sqlite3 by mattn) mentions explicitly that concurrency is possible for reads but not for writes. So I performed tests to verify this and to come up with mitigation strategies.
I knew there are people who have overcome these issues and are running SQLite in production. Thomas H. Ptacek mentioned on HN that his M.O. "is to have a single writer thread with a channel of closures to apply transactions, and a shared db handle for reads."
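That single-writer pattern can be sketched in plain Go. This is my own minimal sketch, not Ptacek's code: a counter stands in for the database so the channel-of-closures shape is visible; in the real application each closure would run one write transaction against a shared *sql.DB.

```go
package main

import "fmt"

// applySerially feeds n increment closures through a single writer
// goroutine, mimicking the "channel of closures" write pattern.
func applySerially(n int) int {
	writes := make(chan func(), 16) // channel of write "transactions"
	done := make(chan struct{})
	counter := 0

	// The single writer goroutine applies transactions one at a time,
	// so SQLite would never see two concurrent writers.
	go func() {
		for w := range writes {
			w() // in the real app: run one SQLite write transaction
		}
		close(done)
	}()

	// Any goroutine can submit a write without worrying about locking.
	for i := 0; i < n; i++ {
		writes <- func() { counter++ }
	}
	close(writes)
	<-done
	return counter
}

func main() {
	fmt.Println(applySerially(100)) // 100
}
```

Because only one goroutine ever touches the write path, no write can interleave with another, which is exactly the guarantee SQLite wants.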
My tests showed that the SQLite DB does indeed lock if reads and writes are performed from goroutines. Limiting the connection pool to one with db.SetMaxOpenConns(1) seems to help a bit. Implementing the readers-writer lock pattern using Go's RWMutex seems to resolve the issue even when using goroutines to read and write. I added _journal_mode=WAL to the connection string for better performance.
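The locking shape can be sketched without any database at all. In this sketch of mine an in-memory map stands in for the shared DB handle; in the real code the struct would hold a *sql.DB opened with a WAL DSN and db.SetMaxOpenConns(1), and the read/write methods would run queries instead of map operations.

```go
package main

import (
	"fmt"
	"sync"
)

// store guards access with a readers-writer lock: many concurrent
// readers are allowed, but a writer gets exclusive access. A *sql.DB
// handle would take the place of the map in the real application.
type store struct {
	mu   sync.RWMutex
	data map[string]string
}

func (s *store) read(key string) string {
	s.mu.RLock() // multiple readers may hold this at once
	defer s.mu.RUnlock()
	return s.data[key]
}

func (s *store) write(key, val string) {
	s.mu.Lock() // exclusive: blocks readers and other writers
	defer s.mu.Unlock()
	s.data[key] = val
}

func main() {
	s := &store{data: map[string]string{}}
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.write("k", "v")
			_ = s.read("k")
		}()
	}
	wg.Wait()
	fmt.Println(s.read("k")) // v
}
```

The trade-off versus the single-writer channel is simplicity: RWMutex is less machinery, but writers block everything while they hold the lock.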
Now, for the actual implementation in the project, the first hurdles I'm facing are with data types and constraints when migrating from PostgreSQL.
SERIAL for the id column had to be replaced with AUTOINCREMENT. The SQLite website warns of a potential performance drawback with AUTOINCREMENT and recommends the plain rowid instead; but the rowid seems to get reassigned after deletes, and I don't wish to change my application logic if not strictly necessary.
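For reference, the id column change looks roughly like this (table and column names are illustrative, not from my actual schema):

```sql
-- PostgreSQL
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);

-- SQLite: INTEGER PRIMARY KEY aliases the rowid; adding AUTOINCREMENT
-- guarantees ids are never reused, at a small performance cost.
CREATE TABLE products (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL
);
```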
ILIKE had to be replaced with LIKE, as LIKE in SQLite is case-insensitive by default. I was able to create a user account with just these changes in my application, but there was a format error with timestamps, as that data type is not present in SQLite.
The issue was with storing dates in a TIMESTAMP column in SQLite. Since SQLite doesn't have rigid data types, we can declare columns with types that don't exist and still store data in them, but it causes issues when retrieving. I changed the column type to text and included a routine to parse the text into time.Time.
I changed the columns with integer and JSONB data types to text as well; I'm planning to store the data as JSON strings in them. Queries which used now() to set dates were changed to take the date as a query parameter. I could have used SQLite's built-in datetime(), but I chose to set it in the code.
Similarly, queries with power() had to be removed, as math functions require compiling SQLite with SQLITE_ENABLE_MATH_FUNCTIONS using Go build tags, and I currently don't have a hard requirement for the math functions in the project.
go-sqlite3 doesn't support multiple statements in sql.DB's Prepare(), which would be very useful for creating tables from a .sql file during the first run. I resorted to splitting the statements before executing them. It seems that SQLite's prepared statements themselves don't support multiple statements, but some drivers like modernc seem to take care of it automatically.
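My splitting approach can be sketched as a naive split on semicolons; splitStatements is my own name for it. This is good enough for a plain CREATE TABLE schema file, but it will mis-split semicolons inside string literals or trigger bodies:

```go
package main

import (
	"fmt"
	"strings"
)

// splitStatements naively splits a schema file on ";". Fine for plain
// CREATE TABLE files; it will break on semicolons inside string
// literals or CREATE TRIGGER bodies.
func splitStatements(schema string) []string {
	var out []string
	for _, stmt := range strings.Split(schema, ";") {
		stmt = strings.TrimSpace(stmt)
		if stmt != "" {
			out = append(out, stmt)
		}
	}
	return out
}

func main() {
	schema := `CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE files (id INTEGER PRIMARY KEY);`
	for _, stmt := range splitStatements(schema) {
		// In the real code each stmt would go to db.Exec(stmt).
		fmt.Println(stmt)
	}
}
```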
This post will be updated as I discover more PostgreSQL to SQLite migration quirks.
I strive to write low-frequency, high-quality content on health, product development, programming, software engineering, DIY, security, philosophy, and other interests. If you would like to receive it in your email inbox, please consider subscribing to my newsletter.