Thursday 13 December 2018

"The Hands on a Watch"



The premise seemed simple enough:

We had about 25 minutes of music we wished to post on youtube, and required the same amount of video to accompany it.

The music already contained material owned by others so, to hopefully limit the amount of potential licensing shenanigans youtube might inflict upon us, we needed 25 minutes of entirely copyright-free video.

We could have used a single static "title" image, or perhaps a slide-show of images, cross-fading from one to another, as is often seen in youtube music videos, but we were looking for something a bit more eye-catching - ideally images that actually kept in time with the music, and maybe reflected its content as well.

With this idea of "keeping in time" came the notion of using a clock face in some way.
Most modern music adheres to a 4/4 structure, with the music changing over time in blocks of 4 bars of 4 beats, and multiples thereof.
A count of "sixteen" would easily cover our metronomic needs - it can be sped up to 8 or 4 beats, or slowed to 32 or 64 as required.

And so the concept was to create several video sequences of a clock counting in multiples or divisions of 8 beats, stitch them together to a total of 25-odd minutes, and add a bit of variation and imagination to make the resulting video more engaging, all the while ensuring that the clock ran perfectly in time with the music.

Here are some sketchy notes on the project...

What we did:


Using a tripod-mounted Canon DSLR with a remote control [to avoid camera shake], we snapped a series of seventeen images of an antique Hunter fob watch.
The watch has a sweeping second hand - as we wanted to keep the beat with the minute hand, we had to be on the ball taking shots at just the right second in each passing minute.
If we missed our mark, the watch was reset and we started again.
We were far from perfect with this task - if you watch the video closely, you can clearly see the second hand bobbing about [we decided to leave this in the cut].
We couldn't touch the watch between each shot as that would disturb the alignment of the series of photos, so we just had to let it run.


--- Photoshop ---


The resulting sequence of seventeen images [twelve o'clock to sixteen minutes past twelve] was resized to our desired video output dimensions - HD @ 1920 x 1080px.

We didn't clean up the images to any great degree - the "artifacts" within each frame would add some subtle variation to the final time-lapsed sequences.

Then various versions of this sequence were also created within photoshop, using filter combinations too involved to detail here.
Suffice to say, we blurred them, we sharpened them, we zoomed, we gave them a glow, we gave them texture, we changed their colour, we highlighted them, we ran them backwards - essentially, we mucked about, looking for usable variations on the theme, and created new sequences of those modified images as well.

Next up ...

--- Virtualdub ---



Ok, here's where things started to get a bit more complicated.

Our 25-minute piece of music was produced a long time ago, with a number of separate sections, each originating from different [vinyl] source recordings, which were not all perfectly matched in tempos.
So while the entire piece of music runs at approximately 95.13bpm, parts of it run at over 95.2bpm, while others crawl along at less than 95.1bpm.
This may not sound like much of a variation, but over a 25 minute stretch it makes a noticeable difference - any clock running accurately at a single fixed tempo over that period would soon be out of sync with the music.
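
To put a rough figure on the drift [a back-of-envelope sketch in Python, using the bpm extremes quoted above rather than measurements from our actual master]:

    minutes = 25
    fast, slow = 95.2, 95.1                  # the bpm extremes mentioned above
    drift_beats = (fast - slow) * minutes    # beats gained by the faster tempo
    seconds_per_beat = 60 / 95.13            # ~0.63s per beat at the average tempo
    print(f"~{drift_beats:.1f} beats adrift after {minutes} minutes "
          f"[~{drift_beats * seconds_per_beat:.1f} seconds]")

That's roughly two and a half beats - over a second and a half of visible slippage on a ticking watch.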

We needed 8 / 16 / 32 beat videos of the watch sequence, running at 95.13bpm but, to keep the clock in sync with the music, we also had to create similar video sequences for the subtle variations in the music's bpm, each one with a different frame rate to reflect the different bpm.
This was a pain.

We could have run the music through Ableton or similar software and cleaned up the numerous tempo issues, but that would have been far too simple a solution...

--- Codecs, frame rates, compression and bit rates ---


[Beware, maths ahead!]

Instead, to match a bpm of 95.13, under "Video Frame Rate Control" [under "Video", then "Frame Rate"], the source rate adjustment for this tempo has to be 95.13 / 60 = 1.5855 [i.e. the number of beats per second], while the "Frame Rate Conversion" needs to be set to 29.97.


For reference...

At 95.13bpm, 1 beat = 0.6307secs, thus 16 beats = 10.0914secs.
So our component source 95.13bpm videos will be of lengths of multiples of approximately 10.09 secs.

And at 29.97fps, this works out as 302 frames for our 16 beats.
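
For anyone wanting to check the arithmetic, here it is as a few lines of Python [the same figures as above, plus the neighbouring tempos]:

    fps = 29.97                            # output frame rate

    for bpm in (95.13, 95.2, 95.1):        # the tempo variants in the music
        beat = 60 / bpm                    # seconds per beat
        sixteen = 16 * beat                # length of a 16-beat block
        print(f"{bpm} bpm: 16 beats = {sixteen:.4f}s "
              f"= {sixteen * fps:.1f} frames, source rate = {bpm / 60:.4f}")

The per-block differences between the tempos are fractions of a frame, but they accumulate over minutes of footage.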

Then we had to knock up further sets of video sequences for the 95.2bpm sections, and the 95.1bpm section, so they had different running speeds, but the same number of frames.
As previously stated, a pain.

No doubt many of you are now thinking - why choose this meshuggenah method?
Why not correct these tempo variations in your video editing suite?
Short answer - we don't have access to the sort of video editing software which would allow us to perfectly sync the video to the frequently shifting audio tempo, whilst applying complex filters etc.
We know such things are out there, we just don't have access to them.
And so it goes.

Ideally, we would have liked to use RAW image files all the way through the entire project, so there would be no degradation to the quality of the video during the various stages of its creation.
However, due to the vast amount of space that would require, this was not in any way realistic, so the project was processed using the XVID codec with a moderately high bitrate.


Ideally, again, all video should be created with multiple passes and a high VBR [variable bitrate], but such are the time constraints of this modern world that, unless it was deemed absolutely necessary, videos were generated with single passes.

Virtualdub, like Photoshop, was also used to apply filters to the movies to provide new sequences - of particular interest to us were the "invert", "motion blur" and "colorize" filters.

Finally, Virtualdub was also used to apply the logo and title overlay to the final cut.
The overlay image was created from the final frame of the Blufftitler-generated title sequence.
This was added to the completed video using the standard logo filter available in virtualdub, and the brightness and contrast of the movie were then adjusted to compensate for the effect the semi-transparent logo overlay had on its appearance. [The overlay is semi-transparent so that it interferes as little as possible with the actual underlying content.]


Once the overlay was applied, the video was "branded".
As well as helping to identify the video's content and origin, this also discourages third parties from appropriating the visual content.
Why they would feel the need to do so is anyone's guess.


--- Soundforge ---


To cope with the vast amounts of video material, and the bpm variations in the audio, the entire project was broken down into 10 or so subsections, each running between 1 and 3 minutes.
We used Soundforge to split up the audio into these subsections, and the corresponding videos were made to match each of these constituent audio parts.
Soundforge was also used to measure out each section precisely, to the nearest millisecond, which provided a time-line and bpm figure for the matching video sections.
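
The bookkeeping for each subsection looked something like this [the names, lengths and bpms below are hypothetical, purely to show the sums]:

    fps = 29.97

    # [name, measured length in seconds, bpm] - example values only
    sections = [("section 1",  94.321, 95.13),
                ("section 2", 161.084, 95.20)]

    for name, secs, bpm in sections:
        beats  = secs * bpm / 60
        frames = secs * fps
        print(f"{name}: {secs:.3f}s = {beats:.2f} beats = {frames:.0f} frames")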

--- Fractal Generator - Chaos Pro ---


Let's just say this is complicated and time-consuming and leave it at that.


See "elsewhere".

--- Blufftitler ---



Used for the credits and for the final fractal sequence where we needed to run two videos, one on top of the other, with transparency.

First, ensure the output settings match those of the project: dimensions = HD 1920 x 1080, frame rate = 29.97fps.


Next, line up and resize the two videos for overlay according to the original image/video.
Adjust the lighting layers to reflect these changes.

Exporting Video: we experienced fewer issues with Blufftitler when exporting numbered .jpg frames rather than compressed video, so we used this method and relied on virtualdub to create videos from these exported images.

Video Duration: Blufftitler is lacking in precision here - it can only create videos in lengths rounded up to whole seconds.
So when creating your video/image sequence [export as numbered frames], go a second too long on the video duration, then delete the excess images at the end of the sequence once the task is complete, so that the total number of .jpg images created is a multiple of 302.
Create a video from this image sequence using VirtualDub [codecs etc as before].
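
If you'd rather not count and delete the surplus frames by hand, a few lines of Python will do it [the folder name and file pattern are placeholders - adjust to suit]:

    import glob, os

    FRAMES_PER_BLOCK = 302                               # 16 beats at 95.13bpm / 29.97fps
    frames = sorted(glob.glob("exported_frames/*.jpg"))  # placeholder path
    keep = (len(frames) // FRAMES_PER_BLOCK) * FRAMES_PER_BLOCK

    for surplus in frames[keep:]:                        # the excess frames at the end
        os.remove(surplus)

    print(f"kept {keep} of {len(frames)} frames")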

NB: Blufftitler can be temperamental...
Sometimes one of the component videos processed fine while the other stuttered or halted completely - in this case, while the clock continued to tick away, the fractals stopped moving for seconds at a time.
The issues were with memory and processing power - invariably there was too much fine detail in the original fractal video component, and the software/machine couldn't handle it.
The first step in the solution was to turn off all unnecessary software - antivirus, wifi, browsers etc. and try again.

But if Blufftitler continues to fail [which it did] ...

"Rinsing"


Essentially, you're attempting to clean the source video [which is causing the problems] without significantly reducing the quality of the content.
    Export a .jpg image sequence of the original video via virtualdub - this should remove any "glitches" from the original video.
    Create an .avi video in virtualdub from this, with the correct timing/frame rate/codec etc.
    Import this into moviemaker and then export as .mp4 video @ HD/29.97fps - this adds essential keyframes for Blufftitler to latch on to, and shifts the video to the H264/MPEG-4 codec.
    Import this .mp4 into avidemux and create a direct .avi copy.
    Use this .avi in blufftitler instead - may still not work, in which case you are possibly stuffed.
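
For what it's worth, the same general shape of pipeline can be approximated from the command line with ffmpeg [we used the programs listed above, not this - the filenames here are placeholders]:

    import os, subprocess

    os.makedirs("rinse", exist_ok=True)

    # blow the problem video out to individual frames [removes glitches]
    subprocess.run(["ffmpeg", "-i", "problem.avi", "rinse/frame_%06d.jpg"], check=True)

    # rebuild at 29.97fps, re-encoded to H264/MPEG-4 with fresh keyframes
    subprocess.run(["ffmpeg", "-framerate", "29.97", "-i", "rinse/frame_%06d.jpg",
                    "-c:v", "libx264", "-pix_fmt", "yuv420p", "rinsed.mp4"], check=True)

    # direct copy back to .avi without re-encoding, much like the avidemux step
    subprocess.run(["ffmpeg", "-i", "rinsed.mp4", "-c", "copy", "rinsed.avi"], check=True)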

You can also try modifying bitrates and quantize levels in the codec, and adding motion blur when creating the frame-rate corrected source video in virtualdub.
Sometimes these methods work, but not always.

Q: Why not do this for all the video components?
A: Because there will invariably be some loss of quality with all this muxing and demuxing.

--- Moviemaker ---



The bulk of the project was implemented here.

Why did we use this software?
    We had some previous experience with this bag of tricks.
    It has [had - see below] some nice features - in particular its edge-detection filter.
    It's all we had.

Sadly, the more modern versions of this Microsoft freebie are not as versatile as earlier incarnations.
However, to run an old version requires an old operating system [windows 7 or earlier], which was a problem - we couldn't source a machine capable both of running the old OS and of processing the vast amounts of HD video involved. The process would simply take too long, or the machine would fall over in a heap.
So we made do with a newer, less versatile version running on newer, faster machines, which is undeniably a shame, as everything about the later versions [2012 onwards] is mediocre when compared to its predecessors.
And so it goes.

Within moviemaker, video was laid out along the time-line using the bpm-matched source videos previously constructed in either virtualdub or blufftitler.
These sections were cut up into individual frames or groups of frames and then modified in numerous ways from adding filters and fades to changing their running speeds.
And from this the video was edited to correspond to the matching audio.

It's worthwhile noting at this point that it can be a bit of a challenge to produce nearly 20 minutes [around 36,000 frames] of dynamic HD video from just 17 almost identical photographs.

--- Avidemux ---




Video and audio were stuck together using this.
Sections were viewed, and if deemed unacceptable, we went back to moviemaker and tried again.

As well as being useful for quickly stitching together the individual components of the video, Avidemux also quickly provides accurate video lengths, to the millisecond.
Plus it is very useful for "rinsing" - see above.

--- Project Facts & Stats ---


Our 25+ minutes of HD video and audio weighed in at just under a hefty 4 gig - it took a while to upload to youtube!
Total project time - unknown.
The original fractals took weeks to generate.
At least 200 hours were then spent constructing the video itself.
See here for details of the motivation behind this.

Here's the final result:





Which all brings us very neatly to...

Our Video Competition
--- aka ---
"We think you can do better"


--- Budding film-makers & animators ---
!!! ALERT !!!
Get your work seen by a larger audience...


We're intending to use the same technique as outlined above to post the best of our AbRAd sessions onto youtube [with video].
As each of these sessions is at least an hour and a half long, we're consequently looking for some more "ticking-watch" footage to use in them.

So here's the plan:
    We provide you with the original sequence of photos of the watch face, plus a piece of music to sync and beat-match the images to, and you create video[s] from them, upload the result and send us the link.

    All winning entries in the competition will be used in our subsequent youtube video sessions, and you will be fully credited for your contributions.

The music we have chosen is the rather wonderful: Dele Sosimi Afrobeat Orchestra - "Too much Information"

It runs at a steady bpm, and has a fair amount of variation to it.

Here's a copy on youtube for you to listen to:



Competition Rules:


The piece of music we have provided runs at exactly 120.00 bpm and is 8 minutes and 32 seconds long.
Please make sure your videos match this music and its tempo.
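
If it helps with planning, the track breaks down like this [the same arithmetic as earlier in the post]:

    bpm, fps = 120.0, 29.97
    length = 8 * 60 + 32                        # 8:32 = 512 seconds

    print(f"{length * bpm / 60:.0f} beats")     # 1024 beats - a tidy 64 x 16
    print(f"{length * fps:.0f} frames minimum") # ~15,345 frames at 29.97fps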

All video entries must be submitted with the following specifications:

    Dimensions: 1920 x 1080 HD
    Frame rate: 29.97fps
    Codec:
      XVID - 15,000kbps or higher bitrate
      Quantize - 1.04 or lower

    Running times of videos must be AT LEAST 8:32 - the length of the original piece of music.

    Videos must be uploaded directly to Google Drive or to Dropbox or to Youtube itself.

    You must then send us an email containing a link to your uploaded video[s].

    The email must be titled "AbstractRadio Youtube Competition".

    Email the links for your entries to : abstractradiomail@gmail.com

Please do not attempt to send videos directly to us - they will not be viewed - only send us the links.



There is currently no closing date for the competition.

You do not have to use the software or methods described above, but they may help!

Good luck and have fun, and please feel free to contact us via the email provided above if you have any questions regarding the competition details.

--- PS ---


In case you're wondering about the origin of the Star Trek samples, they are from the original version, released back in 1990 on Zoom Records.


Purchased back in the day from one of the guys who made the track [unsure if it was Nice or Nasty] - they ran Zoom from a little basement on Camden High Street.
Happy days.

1 comment:

Anonymous said...

Honestly after reading all that. All the work and effort you put in just makes me appreciate it even more.
I honestly believe that one project should make you all rich beyond your wildest dreams. I can't thank you and congratulate you enough. Absolutely amazing. Thoroughly lived every minute of it. Thank you.


AbstractRadio - for your listening pleasure....
Powered by sheer force of will.