First Project Post

The Open Software Package

The open-source software package I have chosen is FFmpeg. This software is used to encode and decode video. Converting a file from one format to another requires decoding the source (decompressing it into raw frames) and then re-encoding it (compressing those frames into the target format).

Benchmark Overview

The chip architectures I will be using are AArch64 and x86_64. I will be using the time command as the measurement to benchmark my project. The first benchmark will be run on the x86_64 machine.

x86_64 and Aarch64

The test input is a video file generated by FFmpeg itself; the command below creates it. I tuned the duration of this video so that the benchmark runs for roughly 4 minutes. Fun fact: if you don't want a 3-hour video that is just moving RGB test bars, you can use your own footage instead. One of the most intensive loads for an encoder is video of fireworks.

ffmpeg -f lavfi -i testsrc=duration=10800:size=qcif:rate=9 testsrc2.mp4

The next step is to exercise the encoder; this is where the decoding and re-encoding happen. During encoding, FFmpeg compresses the sequence of video frames. Commonly used compression techniques include intraframe compression (within a single frame) and interframe compression (reusing data shared between neighbouring frames). Compression matters because every frame of a video is a full snapshot of a picture, and encoding a video without compressing it would produce an enormous amount of binary data. Below is the command used to convert one file format to another.

time ffmpeg -i testsrc2.mp4 -c:v libx264 -preset slow -crf 9 -c:a copy testsrc.avi

After a bit of testing to figure out the video length required, we end up with approximately a 4 minute run time for the format-conversion command. When I talk about run time, I am referring to the user time reported by the command above, i.e. the CPU time spent in user mode; it excludes time the process spends waiting on the kernel, for example for input or on a lock when running multithreaded. FFmpeg also reports some statistics of its own, including the frame count: our video has a total of 97,200 frames (10,800 seconds at 9 frames per second). As mentioned above, without compression there would be a huge amount of data to process; if FFmpeg had to encode every pixel of every frame independently, it would take far more processing time. Compression removes the redundancy between frames, which is why 97,200 frames took only about 4 minutes to decode and re-encode.

Benchmark Results

The results are based on 5 runs of the program, with the average time taken as the benchmark to smooth out outliers such as the server being momentarily slow. No optimization levels were specified when configuring the software, so the build defaults apply.
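The five-run procedure above can be sketched as a small script. This is a sketch, not the exact script used for the post: "sleep 0.1" stands in for the actual ffmpeg conversion command so it runs anywhere, and the log file name times.log is my own choice.

```shell
#!/bin/bash
# Run the benchmarked command 5 times, collecting the time output,
# then average the 'user' figures. "sleep 0.1" is a stand-in for
# the real ffmpeg conversion command.
CMD="sleep 0.1"

rm -f times.log
for i in 1 2 3 4 5; do
  # 'time -p' prints real/user/sys in plain seconds on stderr
  { time -p $CMD ; } 2>> times.log
done

# Average the five 'user' lines
awk '/^user/ { total += $2; n++ } END { printf "average user time: %.3fs\n", total / n }' times.log
```

Averaging the user time rather than the real time keeps the result comparable across runs even if the machine is briefly busy with something else.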

Iteration    x86_64       AArch64
First        4m23.797s    10m31.919s
Second       4m24.806s    10m32.975s
Third        4m25.251s    10m29.189s
Fourth       4m23.555s    10m30.087s
Fifth        4m24.496s    10m27.644s
Average      4m24.381s    10m30.363s
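The averages in the table can be reproduced from the per-run figures. As a quick check, this awk one-liner averages the five x86_64 times (seconds only, since every run took 4 minutes plus change):

```shell
# Average the seconds portion of the five x86_64 runs; all were 4m + seconds.
awk 'BEGIN {
  split("23.797 24.806 25.251 23.555 24.496", s, " ")
  for (i = 1; i <= 5; i++) total += s[i]
  printf "x86_64 average: 4m%.3fs\n", total / 5
}'
# prints: x86_64 average: 4m24.381s
```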

Conclusion

In conclusion, the software was built with its default optimization level, which for FFmpeg is -O3. -O3 enables the compiler's most aggressive optimizations, which is a large part of why the run times are as quick as they are.
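One way to see how a given ffmpeg binary was configured is its -buildconf flag, which prints the configure options it was built with. A sketch (assumes ffmpeg is on the PATH; the grep pattern just looks for any explicit -O flag among the options):

```shell
# Show the configure flags of the installed ffmpeg and look for an
# explicit optimization setting; if none was passed to configure,
# FFmpeg's own default (-O3) applies.
ffmpeg -buildconf 2>&1 | grep -i -- '-O[0-3s]' \
  || echo "no explicit -O flag; build default applies"
```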

Published by Danny Nguyen

I am a curious person. I find interest in all aspects of the software development cycle, software stacks, and how the same software is used in different industries in different ways.
