What Is a Unix Timestamp? A Complete Guide for Everyone
Unix timestamps power almost every computer, database, and API on the planet. Here is what they are, how they work, why they start in 1970, and the famous 2038 problem they cause.
The 60-Second Definition
A Unix timestamp — also called Unix time, Epoch time, or POSIX time — is simply the number of seconds that have passed since January 1, 1970 at exactly 00:00:00 UTC. That moment is called "the Unix epoch." Right now, as you read this, the Unix timestamp is roughly 1,776,000,000 (give or take). Tomorrow it will be 86,400 higher (because there are 86,400 seconds in a day). It is one continuous, ever-increasing number that represents "this exact moment in time" in a format that every computer in the world understands. There are no time zones in Unix time. There are no months or years. Just a count of seconds, ticking forward forever.
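The definition above can be seen directly in a few lines of Python (a minimal sketch; the printed "now" value depends on when you run it):

```python
import time
from datetime import datetime, timezone

# Current Unix timestamp: seconds elapsed since 1970-01-01 00:00:00 UTC.
now = int(time.time())
print(now)

# The epoch itself is timestamp 0.
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00

# One day from now is exactly 86,400 seconds higher.
tomorrow = now + 86_400
print(tomorrow - now)  # 86400
```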
Why January 1, 1970?
The Unix epoch — January 1, 1970 — was chosen for one practical reason: it was a recent, round date close to when the Unix operating system was being developed at Bell Labs. The original Unix designers needed a starting point to count seconds from, and they wanted something that was (a) recent enough to keep the numbers small, (b) easy to remember, and (c) earlier than any date their software would need to represent. The start of the decade fit all three. There is no deeper meaning. It was not chosen because of any particular event. It is just an arbitrary reference point that everyone in the computing world agreed to use — and that agreement has held for over 50 years.
How Computers Use Unix Time Internally
Almost every computer system stores time as a Unix timestamp internally, even when it shows you a human-readable date on screen. When you save a file, the operating system records its modification time as a Unix timestamp. When a database stores when a row was created, it stores a Unix timestamp. When an API sends back a "created_at" field, it is usually a Unix timestamp. The reason is simple: Unix timestamps are unambiguous, easy to compare (just subtract two numbers), easy to sort, and easy to store in a fixed-size integer field. Date strings like "03/04/2026" are ambiguous (is that March 4 or April 3? It depends on the country), inconsistent across languages, and hard to compute on. A simple integer is none of those things.
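The "just subtract two numbers" point is worth seeing concretely. A sketch in Python, using illustrative timestamp values:

```python
# Two events recorded as Unix timestamps (illustrative values).
created_at = 1_776_000_000
updated_at = 1_776_003_600

# Comparing is plain integer subtraction: no parsing, no time zones.
elapsed = updated_at - created_at
print(elapsed)  # 3600 seconds, i.e. one hour

# Sorting a list of events is just sorting integers.
events = [1_776_003_600, 1_776_000_000, 1_776_001_800]
print(sorted(events))  # chronological order, for free
```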
Why Unix Time Avoids the Time Zone Problem
Unix time has no time zones. The number 1,776,000,000 means the same exact moment in time everywhere on Earth. A computer in Tokyo and a computer in Los Angeles both see "now" as the same Unix timestamp at the same instant, even though their wall clocks show times that differ by 16 or 17 hours (depending on daylight saving time). This is hugely useful for distributed systems. When two servers in different cities log an event, they both write the same Unix timestamp — and you can later compare them to see which happened first, no matter where they were. Time zones only enter the picture when humans need to read the time. To display a Unix timestamp in human-readable form, the system applies the user's local time zone at the last moment before showing it.
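A short demonstration of this idea: one timestamp, rendered in three time zones, is still one and the same instant. (This sketch assumes Python 3.9+ for the standard-library zoneinfo module.)

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

ts = 1_776_000_000  # one timestamp = one instant, everywhere on Earth

# The time zone is applied only at display time.
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
tokyo = datetime.fromtimestamp(ts, tz=ZoneInfo("Asia/Tokyo"))
la = datetime.fromtimestamp(ts, tz=ZoneInfo("America/Los_Angeles"))

print(utc.isoformat())
print(tokyo.isoformat())
print(la.isoformat())

# Three different wall-clock strings, but the same moment in time:
print(utc == tokyo == la)  # True
```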
The 2038 Problem (Why 32-Bit Unix Time Will Overflow)
There is a famous bug waiting to happen on January 19, 2038 at 03:14:07 UTC. At that exact moment, the Unix timestamp will reach 2,147,483,647 — the largest number a 32-bit signed integer can hold. Any system that stores Unix time as a 32-bit signed integer (which was standard for decades) will roll over to a negative number, effectively jumping back to 1901. This is called the "Year 2038 problem" or sometimes "Y2K38." It will affect old computers, old embedded systems (industrial controllers, smart meters, medical devices, traffic systems), and any software that has not been updated to use 64-bit integers. Most modern operating systems and languages have already switched to 64-bit Unix time, which can hold dates up to roughly 292 billion years from now — longer than the universe has existed. But the cleanup of legacy 32-bit systems is still ongoing, and is expected to cause real problems in 2038.
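You can reproduce the 2038 rollover yourself. The sketch below computes the last representable 32-bit second and then simulates the wraparound arithmetically (Python's own integers do not overflow, so the wrap is modeled explicitly):

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1
print(INT32_MAX)  # 2147483647

# The last second a 32-bit signed timestamp can represent:
last = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last.isoformat())  # 2038-01-19T03:14:07+00:00

# Simulate the rollover: one second later, a 32-bit signed integer
# wraps around to its most negative value...
wrapped = (INT32_MAX + 1) - 2**32
print(wrapped)  # -2147483648

# ...which a 32-bit system would interpret as a date in 1901.
print(datetime.fromtimestamp(wrapped, tz=timezone.utc).year)  # 1901
```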
How to Convert Between Unix Time and Human Time
Almost every programming language has a built-in function to convert between Unix timestamps and human-readable dates. In JavaScript: new Date(unixTimestamp * 1000) gives you a Date object. In Python: datetime.fromtimestamp(unixTimestamp) returns a datetime. In SQL: FROM_UNIXTIME(timestamp) in MySQL or to_timestamp(timestamp) in PostgreSQL. In Bash: date -d @1776000000 prints the human-readable date. To go the other direction, every language has an "as of now" function that returns the current Unix timestamp: Date.now() / 1000 in JavaScript, time.time() in Python, NOW() in MySQL, and so on. For occasional manual conversion, online tools like epochconverter.com or simply a Google search for "1776000000 unix" will give you the date.
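Here is the round trip in Python, using the article's illustrative timestamp value:

```python
import time
from datetime import datetime, timezone

# Unix timestamp -> human-readable date.
ts = 1_776_000_000
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S UTC"))

# Human-readable date -> Unix timestamp (the round trip is lossless).
back = int(dt.timestamp())
print(back == ts)  # True

# "As of now": the current Unix timestamp in seconds.
print(int(time.time()))
```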
Common Use Cases for Unix Timestamps
Unix timestamps are everywhere in computing. Examples: file modification times in every operating system; database row creation timestamps; API response fields like "created_at" and "updated_at"; HTTP cookie expiry dates; JWT token expiration; cryptocurrency block timestamps; email message headers; log file entries on every server in the world. Every time you see a timestamp displayed on a website ("posted 3 hours ago", "last seen 2 days ago"), the underlying value is almost certainly a Unix timestamp that was converted to relative human language for display. The choice to use Unix time is so universal that programmers rarely think about it — it is just the default.
Milliseconds vs Seconds (and Why Languages Disagree)
There is one annoying inconsistency in how different systems store Unix time: some use seconds, others use milliseconds. The original Unix definition is "seconds since epoch," but many modern systems (especially JavaScript) use "milliseconds since epoch" because they need finer time resolution. As a result, you might see Unix timestamps that look like 1,776,000,000 (seconds) or 1,776,000,000,000 (milliseconds). The way to tell them apart: a current-day Unix timestamp in seconds has 10 digits; in milliseconds it has 13 digits. If you accidentally treat a milliseconds value as seconds, your dates will appear tens of thousands of years in the future (somewhere around the year 58,000) rather than in 2026. Always check which unit a system is using before doing math.
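The digit-count check can be automated. This is a heuristic sketch (the threshold only makes sense for present-day timestamps, which is the common case for API payloads):

```python
from datetime import datetime, timezone

def to_seconds(ts: int) -> int:
    """Heuristic: ~13-digit timestamps are milliseconds, ~10-digit are seconds.

    Assumption: only valid for present-day timestamps (roughly the
    years 2001-5138 for seconds), which covers typical API payloads.
    """
    return ts // 1000 if ts >= 100_000_000_000 else ts

print(to_seconds(1_776_000_000))      # 1776000000 (already seconds)
print(to_seconds(1_776_000_000_000))  # 1776000000 (was milliseconds)

# Treating a milliseconds value as seconds lands so far in the future
# that Python's datetime cannot even represent the result:
try:
    datetime.fromtimestamp(1_776_000_000_000, tz=timezone.utc)
except (OverflowError, ValueError, OSError) as e:
    print("out of range:", e)
```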
Why Unix Time Excludes Leap Seconds
Unix time has a small flaw: it does not account for leap seconds. Leap seconds are extra seconds occasionally added to UTC (historically about every one to two years, though none has been added since the end of 2016) to keep atomic time aligned with the Earth's slightly irregular rotation. UTC has accumulated 27 leap seconds since 1972. Unix time, however, was defined to assume every day has exactly 86,400 seconds — no exceptions. This means that across a leap second, the Unix timestamp pretends nothing special happened. The exact moment of a leap second is technically "missing" from Unix time. For 99.99% of applications, this does not matter. For high-precision scientific or astronomical applications, it does. The leap second is scheduled to be abolished by 2035, after which the discrepancy will stop growing and every UTC day really will be 86,400 seconds long.
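You can see the 86,400-second assumption in action. December 31, 2016 actually contained a leap second (23:59:60 UTC existed), yet Unix time records that day as an ordinary one:

```python
from datetime import datetime, timezone

# 2016-12-31 ended with a real leap second (23:59:60 UTC existed),
# but Unix time pretends every day is exactly 86,400 seconds long.
before = datetime(2016, 12, 31, 0, 0, tzinfo=timezone.utc).timestamp()
after = datetime(2017, 1, 1, 0, 0, tzinfo=timezone.utc).timestamp()
print(int(after - before))  # 86400 -- the leap second is invisible
```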
Unix Time in Different Programming Languages
Quick reference for getting the current Unix timestamp in popular languages:
JavaScript: Math.floor(Date.now() / 1000) for seconds, Date.now() for milliseconds.
Python: int(time.time()) for seconds (requires "import time"), or int(datetime.now().timestamp()).
Java: System.currentTimeMillis() / 1000 for seconds.
C: time(NULL) returns time_t (usually seconds, machine-dependent).
Go: time.Now().Unix() for seconds.
Rust: SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs().
PHP: time() for seconds.
Ruby: Time.now.to_i for seconds.
SQL: UNIX_TIMESTAMP() in MySQL, EXTRACT(EPOCH FROM NOW()) in PostgreSQL.
Bash: date +%s.
When NOT to Use Unix Time
Unix time is great for storing and comparing exact moments, but it is the wrong choice in some situations. Do not use Unix time for: dates without a time component (use a date type like 2026-04-15 instead); recurring schedules (use a calendar with timezone awareness); times before 1970 (Unix time can technically be negative but most systems do not support it well); user-facing time strings (always convert to human-readable format with timezone applied); communication with non-technical people (nobody talks about "1,776,000,000"). The general rule: store Unix time, display human-readable time. Never display raw Unix timestamps to end users.
Tools for Working With Unix Timestamps
For occasional Unix timestamp work, several tools are useful. Online converters like epochconverter.com let you paste a Unix timestamp and see it in human-readable form (and vice versa) in any time zone. Most operating systems can convert in a terminal: "date -r 1776000000" on macOS, "date -d @1776000000" on Linux. Browser dev tools can convert with a one-line JavaScript snippet: new Date(1776000000 * 1000). And of course, Clockzilla's homepage displays the current Unix timestamp live alongside your local time — useful for any developer who wants a quick reference for what "now" looks like in Unix time. The number ticks up by one each second, which is oddly satisfying to watch.
The Bottom Line
Unix time is a count of seconds since January 1, 1970 UTC. It is the universal language computers use to talk about specific moments in time, with no ambiguity about time zones or formats. Modern systems use 64-bit integers to avoid the 2038 overflow problem. Most programming languages give you the current Unix timestamp in one line of code. And every time you see "posted 3 hours ago" on a website, you are looking at a Unix timestamp that was converted to friendly human language at the very last moment. It is not glamorous, but it is the quiet foundation that all of digital timekeeping is built on.
Try Clockzilla Free
Accurate world time for 150,000+ cities with timezone converter, sunrise/sunset calculator, stopwatch, Pomodoro timer, and more.
Open Clockzilla →