Linux get current time milliseconds

Original Question

According to my googling I would expect to be able to set the date like so (the inverse of the above):

date +%s.%N -s `date +"%S.%N"`
date +"%s.%N" -s "1323217126.085882000"

Neither works. Can someone clue me in on the issue? P.S. No, I don't need nanosecond resolution. Yes, I know bash execution takes milliseconds. What I really need is sub-second resolution; 10ths of a second would be good enough.

3 Answers

Here is a solution (Linux, NOT Unix):

date --set="2011-12-07 01:20:15.962" && date --rfc-3339=ns

CURTIME=`date --rfc-3339=ns`
date --set="$CURTIME"
NEWTIME=`date --rfc-3339=ns`
echo $CURTIME
echo $NEWTIME
2011-12-07 01:48:54.687216122+00:00
2011-12-07 01:48:54.720541318+00:00

As you'll notice, whole milliseconds of delay are introduced. This is due to the time it takes to initialize memory for, and load, the date binary. That is true in every shell, and for the exec of insert-higher-level-language-here as well.
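
You can see that overhead yourself with two back-to-back calls (a minimal sketch assuming GNU date; the exact gap varies by machine and load):

T1=`date +%s%N`
T2=`date +%s%N`
echo "two consecutive date calls are $(( (T2 - T1) / 1000000 )) ms apart"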

However, if you just need sub-second resolution in the range of 10ths of a second, this will be good enough in many cases.

And if you're (like me) toying around and feel that CURTIME has drifted too far out of date, use hwclock -s to update system time from the hardware clock (and don't use hwclock -w, which will set the hardware clock to system time) ^^
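
For reference, the direction of those two flags (a sketch; both commands need root):

hwclock -s    # hctosys: set the system clock from the hardware clock
# hwclock -w  would do the reverse: write system time to the hardware clock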

If your goal is to set the time with sub-second precision, then date seems like the wrong command, if I am reading the source correctly.

Assuming you are using the coreutils version of date, it appears to me that the when.tv_nsec = 0 line sets the nanosecond portion of the when variable (a timespec structure) to zero. Even if you could convince date to accept a more accurate value, it looks to me like it would be pointless.

# time.h
struct timespec
{
  __time_t tv_sec;   /* Seconds. */
  long int tv_nsec;  /* Nanoseconds. */
};

# date.c
struct timespec when;
# .
valid_date = posixtime (&when.tv_sec, datestr,
                        (PDS_TRAILING_YEAR | PDS_CENTURY | PDS_SECONDS));
when.tv_nsec = 0; /* FIXME: posixtime should set this. */
# .
/* Set the system clock to the specified date, then regardless of the
   success of that operation, format and print that date. */
if (settime (&when) != 0)


How do I get the current Unix time in milliseconds in Bash?

How do I get the current Unix time in milliseconds (i.e. the number of milliseconds since the Unix epoch, January 1 1970)?

18 Answers

This:

date +%s

will return the number of seconds since the epoch.

This:

date +%s%N

returns the seconds and current nanoseconds.

So:

date +%s%N | cut -b1-13

will give you the number of milliseconds since the epoch: current seconds plus the left three of the nanoseconds.

and from MikeyB: echo $(($(date +%s%N)/1000000)) (dividing by 1000 only brings you to microseconds)

Or, if you want to do it all in the shell, avoiding the expensive overhead of an additional process (actually, avoiding the problem of when the number of digits in %s%N changes): echo $(($(date +%s%N)/1000000))

I think it's worth noting that the OP asked for Unix, not Linux, and the current top answer (date +%s%N) doesn't work on my AIX system.


You may simply use %3N to truncate the nanoseconds to the 3 most significant digits (which then are milliseconds):

date +%s%3N

This works e.g. on my Kubuntu 12.04 (Precise Pangolin).

But be aware that %N may not be implemented depending on your target system. E.g., tested on an embedded system (buildroot rootfs, compiled using a non-HF ARM cross toolchain), there was no %N:

(And also my (non-rooted) Android tablet doesn't have %N.)
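
If your script has to survive on such systems, you can probe for %N support first (a sketch; it assumes, as with the BSD and busybox versions of date, that unsupported implementations echo the sequence back literally):

out=$(date +%N)
case "$out" in
  *N*) echo "date has no %N support here; use a fallback" ;;
  *)   echo "nanoseconds: $out" ;;
esac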

@warren: I saw that you edited and changed the 1397392146%3N to 1397392146%N, but the output 1397392146%3N is what I'd really seen on the busybox console of my Android tablet. Could you explain your edit?

warren's comment from the history is "changed from 3 to 6, as 3 only takes you to microseconds". His edit seems entirely spurious; you should roll it back.

This is a feature of GNU coreutils specifically. Ultimately, this is implemented in gnulib here: github.com/gagern/gnulib/blob/….

date +%N doesn't work on OS X, but you could use one of the following (a combined fallback sketch follows the list):

  • Ruby: ruby -e 'puts Time.now.to_f'
  • Python: python -c 'import time; print(int(time.time() * 1000))'
  • Node.js: node -e 'console.log(Date.now())'
  • PHP: php -r 'echo microtime(TRUE);'
  • Elixir: DateTime.utc_now() |> DateTime.to_unix(:millisecond)
  • The Internet: wget -qO- http://www.timeapi.org/utc/now?\\s.\\N
  • or, for milliseconds rounded to the nearest second, date +%s000
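
If one snippet has to cover several of these systems, here is a hedged fallback sketch (the function name now_ms is mine) that uses date +%s%N where it works and Python elsewhere:

now_ms() {
  local ns
  ns=$(date +%s%N)
  case "$ns" in
    *N*) python -c 'import time; print(int(time.time() * 1000))' ;;  # %N unsupported
    *)   echo $(( ns / 1000000 )) ;;                                 # ns -> ms
  esac
}
now_ms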

Sure, you just have to wait for those interpreters to warm up. This works, too: wget -qO- http://www.timeapi.org/utc/now?\\s.\\N

apple.stackexchange.com/questions/135742/… has instructions for doing this in OSX via Brew’s coreutils

My solution is not the best, but it worked for me:

I just needed to convert a date like 2012-05-05 to milliseconds.
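
The actual snippet didn't survive the copy; one way to do that conversion, assuming GNU date with its -d flag, is:

echo $(($(date -d '2012-05-05' +%s) * 1000))   # midnight of that date, in ms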

Just throwing this out there, but I think the correct formula with the division would be:

echo $(($(date +%s%N)/1000000))

For the people who suggest running external programs to get the milliseconds: at that rate, you might as well do this:

wget -qO- http://www.timeapi.org/utc/now?\\s.\\N 

Point being: before picking any answer from here, please keep in mind that not all programs will run in under one whole second. Measure!
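
The shell's time keyword makes that measurement easy (a sketch; the absolute numbers vary a lot between machines):

time date +%s%N
time python -c 'import time; print(int(time.time() * 1000))'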

You’re not asking the local system for the time. Which I guess is implied in the question. You also depend on a network connection.

@orkoden The question explicitly asks for the "number of milliseconds since Unix epoch January 1 1970". Also, I'm mostly pointing out how you shouldn't ever fire up a whole Ruby or Python (or wget) just to get the time: either this is done through a fast channel or milliseconds don't matter.

Yes, I understood that you were giving a worse solution to highlight the bad solutions’ flaws. I tried several solutions and measured the time. lpaste.net/119499 The results are kind of interesting. Even on a very fast i7 machine date takes 3 ms to run.

@Nakilon and this is why one shouldn’t rely on curl-able conveniences like those for anything production.

If you are looking for a way to display the length of time your script ran, the following will provide a (not completely accurate) result:

As near the beginning of your script as you can, enter the following:

basetime=$(date +%s%N)

This'll give you a starting value of something like 1361802943996000000.

At the end of your script, use the following:

echo "runtime: $(echo "scale=3;($(date +%s%N) - ${basetime})/(1*10^09)" | bc) seconds"

which will display something like:

runtime: 12.383 seconds

(1*10^09) can be replaced with 1000000000 if you wish


"scale=3" is a rather rare setting that coerces bc to do what you want. There are lots more!
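
A quick illustration of what scale does (sketch):

echo "10/3" | bc            # prints 3 (default scale is 0)
echo "scale=3; 10/3" | bc   # prints 3.333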

I only tested this on Windows 7/MinGW. I don’t have a proper *nix box at hand.

Another solution for MacOS: GNU Coreutils

When I started using my .bashrc script from Linux (which uses date to measure how long executed commands run) on a MacOS machine, I noticed that the MacOS version of the date command does not interpret the %N format sequence as nanoseconds; it simply prints N to the output.

After a little bit of research, I learned that only GNU date from the GNU Coreutils package supports milliseconds. Fortunately, it's pretty easy to install it on MacOS using Homebrew:

brew install coreutils

Since that package contains executables that are already present on MacOS, Coreutils' executables are installed with a g prefix, so date becomes available as gdate.
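
After that, the Linux-style invocations from the other answers work unchanged, just with the g prefix (sketch):

gdate +%s%3N   # e.g. 1490665305021 (milliseconds since the epoch)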

See for example this page for further details.

For Alpine Linux (the base of many Docker images) and possibly other minimal Linux environments, you can abuse adjtimex:

adjtimex | awk '/(time.tv_usec):/ { printf("%06d\n", $2) }' | head -c3

adjtimex is used to read (and set) kernel time variables. With awk you can get the microseconds, and with head you can use the first 3 digits only.
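
Prepending the seconds from date gives a full millisecond timestamp (a sketch; the two reads are not atomic, so the value can glitch right at a second boundary):

echo "$(date +%s)$(adjtimex | awk '/(time.tv_usec):/ { printf("%06d\n", $2) }' | head -c3)"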

I have no idea how reliable this command is.

Note: Shamelessly stolen from this answer

Not adding anything revolutionary here over the accepted answer, but just to make it easily reusable for those of you who are newer to Bash. Note that this example works on OS X and in older Bash, which was a requirement for me personally.
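
The snippet itself was lost in the copy; a sketch of that kind of reusable helper that also works where date +%N is missing (the name timestamp_ms is mine):

timestamp_ms() {
  # BSD date on OS X has no %N, so go through Python, which is present on both systems
  python -c 'import time; print(int(time.time() * 1000))'
}
echo "now: $(timestamp_ms)"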

Here is how to get time in milliseconds without performing division. Maybe it’s faster.

# test=`date +%s%N`
# testnum=${test%??????}
# echo ${testnum}
1297327781715

Update: Another alternative in pure Bash that works only with Bash 4.2+ is the same as above, but it uses printf to get the date. It will definitely be faster, because no processes are forked off the main one.

printf -v test '%(%s%N)T' -1
testnum=${test%??????}
echo ${testnum}

Another catch here, though, is that your strftime implementation should support %s and %N, which is not the case on my test machine. See man strftime for supported options. Also see man bash for the printf syntax. -1 and -2 are special values for the time.

Seems like my strftime(3) doesn’t support %N so no way for printf to print nanoseconds. I am using Ubuntu 14.04.

The most accurate timestamp we can get for Mac OS X is probably this:

python3 -c 'import datetime; print(datetime.datetime.now().strftime("%s.%f"))'
1490665305.021699

But we need to keep in mind that it takes around 30 milliseconds to run. We can cut the result down to a two-digit fraction, compute the average overhead of reading the time at the very beginning, and then subtract it from each measurement. Here is an example:

function getTimestamp {
  echo `python -c 'import datetime; print datetime.datetime.now().strftime("%s.%f")' | cut -b1-13`
}
function getDiff {
  echo "$2-$1-$MeasuringCost" | bc
}
prev_a=`getTimestamp`
acc=0
ITERATIONS=30
for i in `seq 1 $ITERATIONS`; do
  #echo -n $i
  a=`getTimestamp`
  #echo -n " $a"
  b=`echo "$a-$prev_a" | bc`
  prev_a=$a
  #echo " diff=$b"
  acc=`echo "$acc+$b" | bc`
done
MeasuringCost=`echo "scale=2; $acc/$ITERATIONS" | bc`
echo "average: $MeasuringCost sec"
t1=`getTimestamp`
sleep 2
t2=`getTimestamp`
echo "measured seconds: `getDiff $t1 $t2`"

You can uncomment the echo commands to see better how it works.


The results for this script are usually one of these three:

measured seconds: 1.99
measured seconds: 2.00
measured seconds: 2.01


C++ — How can we get a millisecond timestamp in Linux?

How do I convert std::chrono::monotonic_clock::now() to milliseconds and cast it to long? Using steady_clock or high_resolution_clock from chrono amounts to the same thing. I have looked at std::chrono::duration_cast, but I only want the current timestamp, not any duration gaps.

clock_gettime() or gettimeofday() both get the current timestamp in a manner that is trivial to convert to milliseconds. Are you specifically looking to use constructs from std::chrono?

I'm looking for something that is not subject to NTP adjustments or any other change. I've heard clock_gettime() cannot be fully reliable, and gettimeofday is adjustable too.

clock_gettime(CLOCK_MONOTONIC) is the primitive used by std::chrono::monotonic_clock. It is as "not subjected to NTP adjustments or any change" as you can get.

@Zack: Look at the definitions of CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW here and you may be surprised.

So there is a CLOCK_MONOTONIC_RAW; strange. So maybe, if chrono uses clock_gettime() as the primitive, it at least uses this raw type 🙂

2 Answers

The current timestamp is defined with respect to some point in time (hence it is a duration). For instance, it is "typical" to get a timestamp with respect to the beginning of the Epoch (January 1st 1970, in Unix). You can do that by using time_since_epoch():

namespace chr = std::chrono;
chr::time_point<chr::steady_clock> tp = chr::steady_clock::now();
std::cout << chr::duration_cast<chr::seconds>(tp.time_since_epoch()).count()
          << std::endl;

To get the value in milliseconds, you would need to cast it to std::chrono::milliseconds instead.

I would mark this the answer but I'm going to use the high_resolution_clock because it has the shortest possible tick available.

@King That should work too. now() returns a time_point for all the clocks, so you just need to change to clock type.

The epoch for a clock is not necessarily the Unix epoch of Jan 1st 1970. It may well be something else, such as boot time, or program start time.

Doesn't work for me. cannot convert from 'std::chrono::system_clock::time_point' to 'std::chrono::time_point<_Clock>'

All the built-in clocks have an associated "epoch" which is their base time. The actual date/time of the epoch is not specified, and may vary from clock to clock.

If you just want a number for comparisons, then some_clock::now().time_since_epoch() will give you a duration for the time since the epoch for that clock, which you can convert to an integer with the count() member of the duration type. The units of this will depend on the period of the clock. If you want specific units, then use duration_cast first:

typedef std::chrono::steady_clock clk;
unsigned long long milliseconds_since_epoch =
    std::chrono::duration_cast<std::chrono::milliseconds>(
        clk::now().time_since_epoch()).count();

As I said, this is only good for comparisons, not as an absolute time stamp, since the epoch is unspecified.

If you need a UNIX timestamp, then you need to use std::chrono::system_clock, which has a to_time_t() function for converting a time_point to a time_t.

Alternatively, you can take a baseline count at a particular point in your program, along with the corresponding time from gettimeofday or something, and then use that to convert relative counts to absolute times.

