What Is a Unix Timestamp?
How epoch time works, how to read and convert Unix timestamps in JavaScript, Python, and SQL, and the common pitfalls — including the Y2038 problem — that catch developers and analysts off guard.
The Basics: Counting Seconds Since 1970
A Unix timestamp is a single number that represents a specific moment in time. It counts the number of seconds that have elapsed since January 1, 1970, 00:00:00 UTC — a moment known as the Unix epoch.
For example, the timestamp 1719619200 represents June 29, 2024 at midnight UTC. The timestamp 0 represents the epoch itself: the very start of January 1, 1970. Negative numbers represent dates before the epoch — -86400 is December 31, 1969.
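These values are easy to check in Python with the standard library (a quick sketch, not part of the conversion section below):

```python
from datetime import datetime, timezone

# Timestamp 0 is the epoch itself
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch)  # 1970-01-01 00:00:00+00:00

# 1719619200 seconds after the epoch
dt = datetime.fromtimestamp(1719619200, tz=timezone.utc)
print(dt)  # 2024-06-29 00:00:00+00:00
```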
This system was introduced in early Unix operating systems in the 1970s, and it stuck. Today, Unix timestamps are used across virtually every programming language, database, API, logging system, and analytics platform. When a server records that something happened at 1706745600, it means February 1, 2024 at midnight UTC — no timezone ambiguity, no format confusion, just a number.
Seconds vs. Milliseconds: The 10-Digit / 13-Digit Rule
There are two common variants of Unix timestamps, and mixing them up is one of the most frequent date bugs in software:
Seconds (10 digits) — The traditional format. A number like 1719619200 is 10 digits and represents seconds since the epoch. This is what most Unix/Linux tools, databases, and backend systems use.
Milliseconds (13 digits) — The JavaScript-era format. A number like 1719619200000 is 13 digits and represents milliseconds since the epoch. JavaScript's Date.now() returns this format. Java's System.currentTimeMillis() does too. Many frontend analytics platforms and APIs use milliseconds.
The quick rule: count the digits. Ten digits means seconds; thirteen means milliseconds. (This heuristic holds for dates between September 2001 and November 2286, which covers any timestamp you're likely to encounter today.) If you accidentally interpret a millisecond timestamp as seconds, you'll get a date roughly 54,000 years in the future. If you interpret seconds as milliseconds, you'll get a date in January 1970 — about three weeks after the epoch.
Some systems use microseconds (16 digits) or nanoseconds (19 digits), though these are less common outside of high-frequency trading and kernel-level programming.
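The digit-count rule can be sketched as a small Python heuristic. Note that `normalize_to_seconds` is a hypothetical helper name, and the thresholds assume contemporary (post-2001) timestamps:

```python
def normalize_to_seconds(ts: int) -> float:
    """Guess the unit of a Unix timestamp from its magnitude
    and normalize it to seconds."""
    digits = len(str(abs(int(ts))))
    if digits <= 10:        # seconds
        return float(ts)
    elif digits <= 13:      # milliseconds
        return ts / 1_000
    elif digits <= 16:      # microseconds
        return ts / 1_000_000
    else:                   # nanoseconds
        return ts / 1_000_000_000

print(normalize_to_seconds(1719619200))              # 1719619200.0 (seconds)
print(normalize_to_seconds(1719619200000))           # 1719619200.0 (from ms)
print(normalize_to_seconds(1719619200000000000))     # 1719619200.0 (from ns)
```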
Why Timestamps Matter
Unix timestamps solve a problem that human-readable dates create: ambiguity. Consider the string 01/02/2024. Is that January 2nd (US convention) or February 1st (European convention)? There's no way to know without context. Now consider 03:00 — is that 3 AM or 3 PM? And in which timezone?
A Unix timestamp eliminates all of this. The number 1706745600 means exactly one thing: a specific second in time, anchored to UTC. Any system anywhere in the world can convert it to local time without ambiguity. This is why timestamps are the preferred format for storing dates in databases, logging events, and transmitting dates between systems.
Timestamps are also trivially sortable and comparable. Want to know which event happened first? Compare two integers. Want to calculate the duration between two events? Subtract one from the other. No parsing, no timezone math, just arithmetic.
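Because timestamps are plain integers, ordering and durations really are just arithmetic. A minimal illustration with made-up session values:

```python
login = 1704067200    # Jan 1, 2024 00:00:00 UTC
logout = 1704070800   # Jan 1, 2024 01:00:00 UTC

# Which event happened first? Integer comparison.
assert login < logout

# How long was the session? Plain subtraction.
session_seconds = logout - login
print(session_seconds)  # 3600 → a one-hour session
```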
Converting Timestamps in Code
JavaScript
JavaScript's Date object works natively with millisecond timestamps:
// Current timestamp in milliseconds
Date.now() // → 1719619200000
// Current timestamp in seconds
Math.floor(Date.now() / 1000) // → 1719619200
// Timestamp → human-readable
new Date(1719619200 * 1000) // → Sat Jun 29 2024 00:00:00 GMT+0000
// Human-readable → timestamp (seconds)
Math.floor(new Date('2024-06-29').getTime() / 1000) // → 1719619200
A common mistake: passing a seconds timestamp directly to new Date() without multiplying by 1000. JavaScript expects milliseconds, so new Date(1719619200) gives you January 20, 1970 — not June 2024.
Python
Python's datetime module handles both directions:
import datetime, time
# Current timestamp in seconds
time.time() # → 1719619200.0
# Timestamp → datetime (UTC)
datetime.datetime.utcfromtimestamp(1719619200) # → 2024-06-29 00:00:00
# Timestamp → datetime (timezone-aware; the preferred approach)
datetime.datetime.fromtimestamp(1719619200, tz=datetime.timezone.utc)
# datetime → timestamp
dt = datetime.datetime(2024, 6, 29, tzinfo=datetime.timezone.utc)
dt.timestamp() # → 1719619200.0
Note: utcfromtimestamp() returns a naive datetime (no timezone info attached). In Python 3.12+, it's deprecated in favor of the timezone-aware version. If you're writing new code, always use the tz= parameter.
SQL (MySQL, PostgreSQL, SQLite)
-- MySQL: timestamp → datetime
SELECT FROM_UNIXTIME(1719619200); -- → '2024-06-29 00:00:00' (in the session time zone)
-- MySQL: datetime → timestamp
SELECT UNIX_TIMESTAMP('2024-06-29 00:00:00'); -- → 1719619200
-- PostgreSQL: timestamp → datetime
SELECT TO_TIMESTAMP(1719619200); -- → '2024-06-29 00:00:00+00'
-- PostgreSQL: datetime → timestamp
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '2024-06-29 00:00:00+00'); -- → 1719619200
-- SQLite: timestamp → datetime
SELECT datetime(1719619200, 'unixepoch'); -- → '2024-06-29 00:00:00'
-- SQLite: datetime → timestamp
SELECT strftime('%s', '2024-06-29 00:00:00');
Command Line (Bash / macOS)
# Linux: timestamp → readable
date -d @1719619200 # → Sat Jun 29 00:00:00 UTC 2024
# macOS: timestamp → readable
date -r 1719619200 # → Sat Jun 29 00:00:00 UTC 2024
# Current timestamp
date +%s # → 1719619200
Common Pitfalls
The Year 2038 Problem (Y2K38)
Many older systems store Unix timestamps as a signed 32-bit integer, which maxes out at 2,147,483,647. That number corresponds to January 19, 2038 at 03:14:07 UTC. One second later, the counter overflows and wraps to a large negative number — interpreted as December 13, 1901.
This is the "Y2038 problem," and it's real. Embedded systems, IoT devices, legacy databases, and older C programs that use 32-bit time_t are vulnerable. Modern 64-bit systems use a 64-bit timestamp that won't overflow for roughly 292 billion years, so if you're on a current OS and language runtime, you're fine. But if you're working with data from legacy systems or firmware, be aware that timestamps near 2038 may behave unexpectedly.
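The wraparound can be simulated by forcing a timestamp through a signed 32-bit integer. This sketch uses `ctypes.c_int32` purely as a demonstration of the overflow, not as something real systems do deliberately:

```python
import ctypes
from datetime import datetime, timezone

MAX_32BIT = 2**31 - 1  # 2,147,483,647

# The last second a signed 32-bit time_t can represent
print(datetime.fromtimestamp(MAX_32BIT, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One second later, the counter wraps to a large negative value
wrapped = ctypes.c_int32(MAX_32BIT + 1).value
print(wrapped)  # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```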
Timezone Confusion
Unix timestamps are always UTC. Always. There is no such thing as a "Unix timestamp in Eastern Time." If someone gives you a timestamp and says it's "in EST," one of two things happened: either they converted a local time to a timestamp without accounting for the UTC offset (a bug), or they're using the term loosely.
When converting a timestamp to a human-readable date, you choose which timezone to display it in. The timestamp itself doesn't change — only the representation does.
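One timestamp, three renderings. Only the display timezone changes. This sketch uses the standard-library zoneinfo module (Python 3.9+; on some platforms it also needs the tzdata package):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts = 1706745600  # Feb 1, 2024 00:00:00 UTC

utc = datetime.fromtimestamp(ts, tz=timezone.utc)
ny = datetime.fromtimestamp(ts, tz=ZoneInfo("America/New_York"))
tokyo = datetime.fromtimestamp(ts, tz=ZoneInfo("Asia/Tokyo"))

print(utc)    # 2024-02-01 00:00:00+00:00
print(ny)     # 2024-01-31 19:00:00-05:00
print(tokyo)  # 2024-02-01 09:00:00+09:00

# All three are the same instant — equality holds across timezones
assert utc == ny == tokyo
```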
Leap Seconds
Unix time does not count leap seconds. The POSIX standard defines each day as exactly 86,400 seconds, even though astronomical days occasionally include a leap second (an extra second inserted to keep UTC aligned with Earth's rotation). In practice, most systems handle leap seconds by "smearing" them — distributing the extra second across a longer period. For almost all applications, this is invisible and irrelevant. But if you're doing sub-second precision work in astronomy or metrology, be aware of it.
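The 86,400-second rule is easy to observe: consecutive UTC midnights always differ by exactly 86,400 in Unix time, even across December 31, 2016, a day when a real leap second was inserted:

```python
from datetime import datetime, timezone

dec31 = datetime(2016, 12, 31, tzinfo=timezone.utc).timestamp()
jan1 = datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp()

# The 23:59:60 leap second is invisible to Unix time
print(jan1 - dec31)  # 86400.0
```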
Negative Timestamps
Dates before January 1, 1970 are represented as negative timestamps. -86400 is December 31, 1969. Not all systems handle negative timestamps correctly — some older JavaScript engines, some database functions, and some APIs will reject or misparse them. If you're working with historical dates (birthdates, historical records), test your stack with negative values.
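A quick way to test your own stack with pre-1970 values. This works in CPython on Linux and macOS, though some platforms' C libraries reject negative timestamps, and the birthdate below is just an illustrative value:

```python
from datetime import datetime, timezone

# One day before the epoch
dt = datetime.fromtimestamp(-86400, tz=timezone.utc)
print(dt)  # 1969-12-31 00:00:00+00:00

# A pre-1970 birthdate round-trips through a negative timestamp
birthdate = datetime(1955, 3, 15, tzinfo=timezone.utc)
ts = int(birthdate.timestamp())
print(ts)  # negative: roughly -467 million seconds
assert datetime.fromtimestamp(ts, tz=timezone.utc) == birthdate
```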
When Not to Use Unix Timestamps
Timestamps are ideal for storing and comparing instants in time. But they're not always the best choice:
Calendar dates without times — If you're storing a birthday (March 15, 1990) or a holiday (December 25), a timestamp adds false precision. The date doesn't have a time component, and converting it to a timestamp forces you to pick a time (usually midnight) and a timezone. Store these as date strings or date-only types.
Recurring events — "Every Tuesday at 3 PM Eastern" can't be represented as a single timestamp. You need a timezone-aware recurrence rule (like RFC 5545 / iCalendar format).
User-facing display — Showing 1719619200 to an end user is meaningless. Always convert to a human-readable format for display, and always include the timezone so the reader knows the frame of reference.
Quick Reference
| Timestamp | Date (UTC) | Notes |
|---|---|---|
| 0 | Jan 1, 1970 00:00:00 | The Unix epoch |
| 86400 | Jan 2, 1970 00:00:00 | Exactly 1 day (86,400 seconds) |
| 946684800 | Jan 1, 2000 00:00:00 | Y2K / millennium |
| 1000000000 | Sep 9, 2001 01:46:40 | The billennium |
| 1704067200 | Jan 1, 2024 00:00:00 | Start of 2024 |
| 2147483647 | Jan 19, 2038 03:14:07 | 32-bit overflow (Y2038) |
| -86400 | Dec 31, 1969 00:00:00 | One day before epoch |
Related Guides
Unix timestamps are just one piece of the date puzzle. For the standard that builds on top of them, see ISO 8601 Explained. For a comparison of all common date formats side by side, see the Date Format Cheat Sheet. If you need to convert an entire column of timestamps in a spreadsheet, see How to Convert Dates in CSV Files. For the quirks of parsing timestamps in JavaScript specifically, see JavaScript Date Parsing Pitfalls. To understand how databases store timestamps internally, see How Databases Store Dates.
Try It Yourself
Convert any date or timestamp instantly — free, no sign-up required.
Open the Converter