The key to creating gorgeous, glitchy YouTube images: anticipation and deletion

July 15, 2018 | By Nazmul Khan


When I was younger, I had a soccer coach who stressed the importance of anticipation. “An-tiiii-ciiiiiii-PAY-shun,” he’d yell at us, while we were diving around for the ball. If we did it right, he promised, we’d be able to do in soccer what Neo does in The Matrix — not, like, stop bullets, but be in the right place at the right time to stop an attack on our goal. I wasn’t too great at it, at least not at first.

But the lesson stuck. I can hear coach’s voice even now, when I navigate the crush of travelers during New York City’s all-too-frequent rush hours. This is all to say that prediction is key; it’s the difference between getting the ball in the back of the net and whiffing entirely, the gap between getting a seat on a crowded train and having to wait, chastened, for the next one. And, as I recently learned, prediction is the difference between a YouTube video and glitch art.


The other day I came across a Twitter bot, @youtubeartifacts, which tweeted out screenshots and clips from random YouTube videos — but the images and videos were bitcrushed, pixelated, and kinetic, more abstract painting than encoding error.


Image: David Kraftsow

There’s a name for this kind of glitched-out aestheticism, and it turns out to have a well-established artistic past. “The bot uses my own variation on an old glitch art technique called ‘datamoshing,’ which basically generates a specific kind of h264 compression glitch which creates the smeared, pixelated, sometimes painterly artifacts you see in the output,” says David Kraftsow, the artist behind @youtubeartifacts. (H.264, also known as MPEG-4 Part 10 or Advanced Video Coding, is a standard for recording, compressing, and distributing video. Standardized in 2003, it is now used for most video on the internet, and it delivers better quality at a given file size than earlier standards.)

“It’s actually a somewhat old glitch art project of mine that’s gone through a lot of iterations, the most recent of which is the Twitter bot,” Kraftsow writes to me in an email. It started as a website in 2009, where anyone could enter a YouTube URL and see specific glitch effects in their browser — but it was hard to maintain, Kraftsow explains, which meant it didn’t last very long. Then, the curators of digital art collective Rhizome asked him to create a more robust version: a desktop app.

“I refashioned the site and had it look specifically for ‘vlogger’ content to generate stills,” he says. “Then a few years ago” — February 2015 — “I made the app into a Twitter bot, which itself has gone through a few versions. The most recent of which generates 4K imagery from a convoluted youtube search that looks for (among other things) vloggers, beauty/cosmetics vids, sports, and nature/landscape videos.”


Image: David Kraftsow

As Kraftsow mentioned, datamoshing is a type of glitch art — which, in the context of art history, can be broadly defined as art created by corrupting or otherwise manipulating an existing file — that has roots in the net art movement of the early aughts. One of the most influential examples of the technique was a 2003 video called “Pastell Kompressor,” by the artists Owi Mahn and Laura Baginski. “As basis for ‘pastell compressor’ we have been using time-lapse shootings of clouds drifting by, which we took on the plateaus in the south of france [sic],” they wrote. They ran it through a proprietary codec, called “sörensen-3,” which blended the French plateaus with a person’s figure. Two years later, the artist Takeshi Murata created “Monster Movie,” which set footage from a 1981 B-movie against a heavy soundtrack, and which is now in the Smithsonian’s permanent collection; it is perhaps the most influential piece in the datamosh canon. In 2009, Kanye West used the technique in the video for “Welcome To Heartbreak.”


Conceptually, datamoshing is pretty easy: To create the most basic version of those dramatic, pixelated effects, all you have to do is take advantage of how videos are encoded. Essentially, there are three kinds of frames, which store compressed images: I-frames, P-frames, and B-frames. As an excellent tutorial has it, I-frames are “intra frames,” which means each one contains a complete compressed image on its own. P-frames are “predictive frames,” which hold abstract information — essentially, they store data for how the video’s pixels move, and nearly nothing else. (B-frames are a little different, because they’re like predictive frames but bi-directional, predicting from frames both before and after them; they don’t have much to do with glitching.) So, to datamosh, all you do is delete the I-frames. Delete the image data — all the identifiable, still images of the video — and you’re left with the abstract, interior information that populates the space between images. You just leave in the an-tiiii-ciiiiiii-PAY-shun, the predictions, which on their own produce the hallmark swirl of glitchy pixels that visually define a datamoshed video. Simple, right?
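If you want to see that deletion spelled out in code, here’s a minimal sketch using PyAV, a Python binding for FFmpeg’s libraries. To be clear, this isn’t Kraftsow’s method or the only way to do it — just one blunt take on the idea: copy every compressed video packet straight through except the keyframes (the I-frames). The filenames are placeholders, and keeping the very first keyframe is my own choice, so the decoder has one real image to start smearing from.

```python
# A rough sketch of I-frame deletion at the packet level, using PyAV
# (pip install av). Filenames are placeholders; audio is ignored for
# simplicity.
import av

src = av.open("graphene_45s.mp4")
dst = av.open("moshed.mp4", "w")

in_stream = src.streams.video[0]
# Copy the codec parameters so packets pass through without re-encoding;
# re-encoding would generate fresh I-frames and "repair" the glitch.
out_stream = dst.add_stream(template=in_stream)

kept_first = False
for packet in src.demux(in_stream):
    if packet.dts is None:
        continue  # skip the empty "flush" packets demux() emits
    if packet.is_keyframe:
        if kept_first:
            continue  # delete every later I-frame: this is the mosh
        kept_first = True  # keep one so the decoder has a starting image
    packet.stream = out_stream
    dst.mux(packet)

src.close()
dst.close()
```

Play the result in a forgiving player like VLC: when the reference images are missing, the decoder’s error concealment carries the old pixels forward and smears the motion over them. A stricter player may simply refuse to decode, and re-encoding the output would undo the effect.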

I decided to try it for myself, starting with something familiar: Verge Science’s excellent video on graphene that came out earlier this week. I cut the video down to 45 seconds using iMovie, which felt like a manageable length, and then ran it through Avidemux version 2.5.4 (a free, popular video editor) to delete my I-frames; then I used VLC (an excellent video player) to play back my results. (A good rule of thumb about I-frames is that, because they’re anchor points, they exist at just about every cut. Avidemux identifies them for you — just press the up and down arrow keys to scroll through every single one in a video.)
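If you’d rather script that scrubbing step than arrow-key through Avidemux, the same PyAV library from the earlier sketch can print every frame’s type and timestamp; again, the filename is a placeholder:

```python
# List each decoded frame's timestamp and type (I, P, or B), assuming
# PyAV (pip install av). The I lines should cluster at the cuts.
import av

with av.open("graphene_45s.mp4") as container:
    for frame in container.decode(video=0):
        t = frame.time if frame.time is not None else 0.0
        print(f"{t:8.3f}s  {frame.pict_type.name}")
```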

It took me six attempts and nearly an hour to get from the first 45 seconds of this…

…to this:

It was a little harder than I thought. But I persevered. I believed in my P-frames. Eventually, I got this.

It’s like my soccer coach might say: Perseverance is just as important as figuring out where your pixels are going.




