Posts Tagged video

We got married (to each other) and then did cute couple things on a boat!


Remember that time that Joey Marianer and I got engaged (to each other)? Well, a while after that, we also got married, as is typical for engaged couples. It was just a small ceremony in a courthouse, followed by a small gathering with two large cheesecakes. Here’s a very short video synopsis of the wedding:

I just edited the closed captions, and noticed that YouTube’s autocaptions thought the judge said, ‘congratulations youtube and you guys get a blog it’s okay’ when in fact he said, ‘Congratulations, you two. And you guys can applaud; it’s okay!’ Anyway, that video has been gathering congratulations on YouTube for a while, so now you guys get a blog about it.

Then we ate cheesecake with a few friends, and Joey did this:

If you don’t understand what just happened, here’s a video that will explain it.

A few days and two negative COVID tests later, we boarded the JoCo Cruise. On cosplay day we cosplayed each other:

Two photos side by side. The left one is labelled 'everyday' and shows me wearing a pink gingham skirt, a black T-shirt with a picture of Steve Wozniak in the rainbow Apple logo colours, an Apple logo necklace, and a blue custom mask, while Joey is wearing peach-coloured board shorts, a white T-shirt showing palm trees in a sunset, and a white mask. The right photo is labelled 'cosplay day' and has Joey wearing the pink skirt, the Steve Wozniak T-shirt in a larger size, the necklace, and the blue mask, and me wearing pink board shorts, the white palm tree shirt, and a white mask.
What we wore the day before cosplay day vs. what we wore on cosplay day. Joey didn’t wear a wig because the masks are uncomfortable enough. Long-time fans know that long hair and a tiara are canon for Joey cosplay.

This was mainly to facilitate changing into our pants for the Fancy Pants Parade. I haven’t found a video that shows us in the parade itself, but here’s one of us practising last year, when we thought the cruise might be virtual again so we’d need a video:

Then two days later, all decked out for formal night, we did a show, which was also quite geared towards things we could only do while physically together. I structured the setlist to tell the story of how our relationship developed through collaborating on songs.

If you prefer, there’s also a playlist of individual pieces. I even made a playlist of the videos mentioned in the show, complete with the comments in which I might have been flirting with Joey. Some of the poems and songs are on my album, one is on our album of SpinTunes entries, some are on the playlist of Hallelujahs, but three of them have never been published anywhere before.

I’ve wanted to do a poetry show on the cruise for a while, but was always afraid of having to miss out on other events that were scheduled at the same time as it. I did a show on the 2021 virtual cruise where that wasn’t so much of a concern, and people seemed to like it, so an in-person sequel seemed like a good idea. Only, with Joey’s help, it was not merely a poetry show, but also a musical show!

My fears were somewhat realised; this show was scheduled opposite the reception for frequent cruisers, which most of the people who know me well enough to attend my show were eligible to go to. But the other thing those people know is that I film every event I’m at (so far I’ve uploaded 12 hours, 48 minutes of video from the 2022 cruise, and I’m only up to Wednesday afternoon), so they could safely miss the show and watch it now! We got through the setlist with a little time to spare, so we even got to attend the reception briefly.


How I got to work at CERN (video) and some rambling about video (text)


Right before JoCo Cruise 2020, I bought a 256GB SD card, just as a treat, so I wouldn’t have to worry about switching between 64GB cards and offloading videos to my Mac when recording as many events as possible. I discovered too late that my camera (a Nikon P7700) was too old to support 256GB cards, and when I got home from the cruise we were in lockdown, so I couldn’t return the card. This ultimately led to my buying a new camera (a Sony ZV-1) in late 2021 which would support the card, and I am very happy with this application of the sunk cost fallacy.

I planned to test out the camera at a Burning Hell show in Vienna (which would have been my first in-person concert since the cruise), but Austria went back into lockdown, so instead, I recorded myself talking about how I got to work at CERN, as a sequel to That time Steve Wozniak bought me a laptop and That time Steve Wozniak taught me to Segway and then played Tetris and pranks through a concert. I recorded 36 minutes continuously, in 4K, and I ran out of things to say before the camera had to stop for any reason. My old camera would have to stop after less than 30 minutes of recording in 1080p, due to the 4GB file size limit, so I’ll call that a success.

Whether my 36 minutes of talking about my route to CERN is worth watching is up to you to decide:

The video is fully closed-captioned by me, a human, so if you prefer skimming text to watching videos, click the symbol (at the end of the line below the title) and choose Open Transcript.

The Burning Hell are making another attempt at a Vienna show in September, so here’s hoping I can run that original experimental design.

I’ve since been on JoCo Cruise 2022, and found the camera much less stressful than my old one for recording concerts. Not only do I not need to change the card as often; I’m also not limited to 4GB files, or by the battery capacity, since it can use an external battery pack. So I’ve recorded most events continuously. The new camera also shows me what it’s focussing on, and I can change that using the touch screen, so from what I’ve seen so far, I haven’t had any incidents of an entire video being out of focus.

The only issue I had with the new camera is that if I start recording video immediately after turning it on, it doesn’t show that it’s recording for another second or so, so I’d often press the button again and inadvertently stop recording in my attempt to start recording. In most cases I realised what had happened immediately and started recording again, but in one case I didn’t notice for a while and missed the introduction of a performer.

I used to do most of my lightweight cruise video editing in QuickTime Player, but for whatever reason its ‘Split Clip’ option is disabled for the videos from my new camera, so I’m trying out LosslessCut. It has a few issues, but I’ve found workarounds for them. One great thing about it is when I cut a show into individual parts, it can not only export those parts as individual videos, but also give me a list of times for those parts. I can paste them into the YouTube description of the full video so that they show up as chapters.

With the help of that feature, I’m uploading most of them as full shows with chapter markers, and putting them into a playlist of the whole cruise as I do so. I’m also splitting shows into individual songs/stories/questions when relevant, and uploading those as separate videos, so I can add the individual pieces to other relevant playlists. Look in the description of any of the full-show videos to find a link to a playlist of the individual parts of the show. It will take a while to get everything processed and uploaded, so subscribe or check back later to see more.
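That chapter format is simple enough to generate without hand-editing. Here’s a rough sketch (in Python, not anything I actually used; the segment list is invented) of turning segment start times into the chapter lines YouTube reads from a description:

```python
def to_timestamp(seconds):
    """Format seconds as M:SS, or H:MM:SS once past an hour."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02}:{s:02}" if h else f"{m}:{s:02}"

def chapter_lines(segments):
    """segments: (start_seconds, title) pairs. YouTube expects the first
    chapter to start at 0:00 and at least three chapters in total."""
    return "\n".join(f"{to_timestamp(t)} {title}" for t, title in segments)

print(chapter_lines([(0, "Intro"), (83, "First song"), (3671, "Encore")]))
# 0:00 Intro
# 1:23 First song
# 1:01:11 Encore
```

Paste the resulting lines anywhere in the video description and YouTube picks them up as chapters.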

Another thing I did (and recorded some video of) during that trip was marry Joey Marianer, but that can have its own blog post later. If you’re impatient, you can check out the Joey-Angela Merger playlist.


A successful ploy to increase engagement


Well, in 2021, among other things, I released an iOS app and a poetry album, wrote an article about accessibility, tech edited three articles about iOS development, won my second Fancy Pants Parade, did a poetry show, wrote a macOS app to find words that look or sound like they’re related but aren’t and a script to make etymological family trees, found a job, lost a job, found a job again, and finally buried a job in soft peat for three months and recycled it as firelighters (that last bit is an exaggeration. Burning jobs to keep warm is not advisable.)

Here’s another exciting thing that happened that I didn’t mention on this blog. During a brief lull in the apocalypse, Joey Marianer came to visit, and we got engaged… to each other! We had of course already discussed this previously, and I wasn’t expecting a song and dance to be made about it, but there was nevertheless a song, as follows:

It’s a parody of the “Weird Al” Yankovic original, “Good Enough for Now“. I find metal rings uncomfortable and a bit dangerous, so Joey got me a silicone engagement ring with a ring on it. This is a much cooler idea than the off-the-shelf ring I got Joey which has flowers on it and no explicit mathematical concepts.

The pretense for recording that was that immediately beforehand, we’d sung some words I’d written to a tune that came to Joey in a dream:

Joey happened to be here while my friend Phil got married (a year later than planned) and joined a group of Phil’s vaccinated and tested friends to celebrate in Tenerife. So here we are walking along the beach looking all couple-y.

Angela and Joey holding hands walking in wet sand along the edge of the waves on a beach in Tenerife. We're both wearing pink board shorts and light-coloured T-shirts. In the background are blue skies and apartment buildings.

I’ll eventually put up videos of some things we saw in Tenerife. After we got back from Tenerife but before Joey went home, we recorded a few short videos in which we are exceedingly cute at each other while demonstrating some linguistic concepts. Here we explore the differences in our accents:

And here we demonstrate how personal deixis can change the meaning of a sentence depending on context:

So, plague willing, we’ll get married in February, have multiple wedding-adjacent cake-eating parties in various real and virtual places over the next several years, and at some point during that time I’ll get the appropriate visa so we can move in together and hopefully only get on each other’s c-tactile nerves.

And now for some unrelated things to look forward to on my YouTube channel. The above videos were shot on my iPhone, which was my first experience with 4K HDR. I’m not sure if editing that on my mid-2014 MacBook Pro did the HDR justice.

However, I bought a new camera recently which can do 4K, and also has several other features which will make recording concerts (and indeed, entire cruises full of concerts) easier — no more stopping to get around a 4GB file size limit, or change batteries, or change SD cards. I won’t generally film entire concerts in 4K due to the space requirements and likelihood of the camera overheating and shutting down, but it’s a nice feature to have for other things. I’ve also ordered the new MacBook Pro, which will have a better display for viewing and editing such video.

I planned to film as much as possible of a concert here in Vienna in 4K, just to see how long I could film continuously in 4K if I took all the measures I knew about to prevent overheating. The concert had to be cancelled due to lockdown, so instead, I recorded myself talking about how I got to work at CERN, as a sequel to the video about getting a laptop from Woz and going to a concert with him. I recorded in 4K for 36 minutes nonstop (which is longer than my old camera can record nonstop even in 1080p) before I ran out of things to say, so I’d call that a successful test. When the new MacBook arrives, I’ll edit that video and hopefully put it online before flying away to get married and (insert SARS-CaVeat here) record an entire cruise full of concerts. I hope I remember how to record and process an entire cruise full of concerts after a year off, and don’t make too many mistakes with the new camera.


My Fancy Pants on JoCo Cruise 2021


I had some plans for my entry into the JoCo Cruise 2021 Fancy Pants Parade, but they involved being on an actual cruise ship. When it went virtual, I assumed there would be no parade. When the call for video submissions came on 16 March, with the deadline on 31 March, I was unprepared. I’m not shopping in-person, and I didn’t think I’d be able to order materials and make anything in time.

But as much as the virtual cruise makes it impossible to do some things we would do on the real cruise, it also makes it possible to do things we couldn’t do on the real cruise. In one in-person Fancy Pants Parade, there was a person in a motion capture suit holding a sign saying ‘we’ll fix it in post’, and also a person in a green screen suit (who was controlling the tentacles of their partner’s pants.) In a virtual Fancy Pants Parade, we really can fix it in post. So I decided to try using my pants as a green screen — for what, I wasn’t sure.

At first I thought I’d try with some black jeans and hope I could tune the green screen effect for them, but then I realised I actually had blue-green jeans (purchased purely because I was excited to find jeans that were the right length for me.) I paraded ridiculously across the room in them, and Final Cut Pro immediately recognised them as the colour to apply the green screen effect to.

I settled on showing footage from previous Fancy Pants Parades on my pants. At first I thought I’d use my own pants, to not steal anyone else’s glory, but I didn’t have footage of all my own pants. I went with the winning pants from each parade, making this sort of a retrospective — a celebration of the whole tradition of Fancy Pants Parades. As the live version of Mr. Fancy Pants often says, chances are you’re best in everybody’s pants.

After submitting my entry, I duplicated the footage, enabling different settings in each copy, to make this short step-by-step. I’ve never used a green screen effect before, so this was me learning as I went along.

I submitted my video on 21 March. On 30 March, the JoCo Cruise Home Office sent out an email saying they’d only received one submission so far, and Jonathan was “nigh-inconsolable” about it. So I encouraged some friends to submit some — as I mentioned in my last post, winning by default is not as much fun as winning by crushing the hopes and dreams of your friends. So here’s how the Fancy Pants Parade went. Watch it before reading the rest of the post if you don’t want the result spoiled:

There was a lively exploration of the problem space of pants. What is fancy? Does it modify ‘pants’, or ‘parade’? What are the most important components of being ‘best in terms of pants’: physical pants-crafting, presentation, or spirit? And is that fancy pants spirit, or we’ve-been-home-for-a-year spirit? Still, it seemed that at least the chat comments were mostly in my favour, until, in a shocking twist, they found Gina’s video, which had been accidentally left out of the parade. And hers, too, used some movie magic! More debate: Culture and history? Conception, or construction? All pants, no dance? If you are silent, the pants will speak. I put my pants on one leg at a time, but in four dimensions, somehow.

It came down to a vote, and… I won! But all the particiPANTS were winners.

This is my second win… as you might guess from this year’s video, I also won in 2014. I am not the first person to win twice — the 2016 winner had also won previously, I think in 2013.


Audio Word Clouds


For my comprehensive channel trailer, I created a word cloud of the words used in titles and descriptions of the videos uploaded each month. Word clouds have been around for a while now, so that’s nothing unusual. For the soundtrack, I wanted to make audio versions of these word clouds using text-to-speech, with the most common words being spoken louder. This way people with either hearing or vision impairments would have a somewhat similar experience of the trailer, and people with no such impairments would have the same surplus of information blasted at them in two ways.

I checked to see if anyone had made audio word clouds before, and found Audio Cloud: Creation and Rendering, which makes me wonder if I should write an academic paper about my audio word clouds. That paper describes an audio word cloud created from audio recordings using speech-to-text, while I wanted to create one from text using text-to-speech. I was mainly interested in any insights into the number of words we could perceive at once at various volumes or voices. In the end, I just tried a few things and used my own perception and that of a few friends to decide what worked. Did it work? You tell me.

Part of the System Voice menu in the Speech section of the Accessibility panel of the macOS Catalina System Preferences

Voices

There’s a huge variety of English voices available on macOS, with accents from Australia, India, Ireland, Scotland, South Africa, the United Kingdom, and the United States, and I’ve installed most of them. I excluded the voices whose speaking speed can’t be changed, such as Good News, and a few novelty voices, such as Bubbles, which aren’t comprehensible enough when there’s a lot of noise from other voices. I ended up with 30 usable voices. I increased the volume of a few which were harder to understand when quiet.
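All of those voices are also scriptable from the command line via `say` — which is one possible route (an assumption; I’m not claiming it’s the API my app used) to render a single word with a chosen voice, rate, and output file. A sketch that just builds the command:

```python
def say_command(word, voice, rate_wpm, out_path):
    """Argument list for the macOS `say` command: -v picks the voice,
    -r sets the rate in words per minute, -o writes to an audio file.
    Pass the list to subprocess.run() on a Mac; it's only built here."""
    return ["say", "-v", voice, "-r", str(rate_wpm), "-o", out_path, word]

cmd = say_command("cruise", "Moira", 200, "cruise.aiff")
```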

I wondered whether it might work best with only one or a few voices or accents in each cloud, analogous to the single font in each visual word cloud. That way people would have a little time to adapt to understand those specific voices rather than struggling with an unfamiliar voice or accent with each word. On the other hand, maybe it would be better to have as many voices as possible in each word cloud so that people could distinguish between words spoken simultaneously by voice, just as we do in real life. In the end I chose the voice for each word randomly, and never got around to trying the fewer-distinct-voices version. Being already familiar with many of these voices, I’m not sure I would have been a good judge of whether that made it easier to get used to them.

Arranging the words

It turns out making an audio word cloud is simpler than making a visual one. There’s only one dimension in an audio word cloud — time. Volume could be thought of as sort of a second dimension, as my code would search through the time span for a free rectangle of the right duration with enough free volume. I later wrote an AppleScript to create ‘visual audio word clouds’ in OmniGraffle showing how the words fit into a time/volume rectangle.  I’ve thus illustrated this post with a visual word cloud of this post, and a few audio word clouds and visual audio word clouds of this post with various settings.

A visual representation of an audio word cloud of an early version of this post, with the same hubbub factor as was used in the video. The horizontal axis represents time, and the vertical axis represents volume. Rectangles in blue with the darker gradient to the right represent words panned to the right, while those in red with the darker gradient to the left represent words panned to the left.

However, words in an audio word cloud can’t be oriented vertically as they can in a visual word cloud, nor can there really be ‘vertical’ space between two words, so it was only necessary to search along one dimension for a suitable space. I limited the word clouds to five seconds, and discarded any words that wouldn’t fit in that time, since it’s a lot easier to display 301032 words somewhat understandably in nine minutes than it is to speak them. I used the most common (and therefore louder) words first, sorted by length, and stopped filling the audio word cloud once I reached a word that would no longer fit. It would sometimes still be possible to fit a shorter, less common word in that cloud, but I didn’t want to include words much less common than the words I had to exclude.

I set a preferred volume for each word based on its frequency (with a given minimum and maximum volume so I wouldn’t end up with a hundred extremely quiet words spoken at once) and decided on a maximum total volume allowed at any given point. I didn’t particularly take into account the logarithmic nature of sound perception. I then found a time in the word cloud where the word would fit at its preferred volume when spoken by the randomly-chosen voice. If it didn’t fit, I would see if there was room to put it at a lower volume. If not, I’d look for places it could fit by increasing the speaking speed (up to a given maximum) and if there was still nowhere, I’d increase the speaking speed and decrease the volume at once. I’d prioritise reducing the volume over increasing the speed, to keep it understandable to people not used to VoiceOver-level speaking speeds. Because of the one-and-a-bit dimensionality of the audio word cloud, it was easy to determine how much to decrease the volume and/or increase the speed to fill any gap exactly. However, I was still left with gaps too short to fit any word at an understandable speed, and slivers of remaining volume smaller than my per-word minimum.
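For the curious, that search can be sketched roughly like this (a simplification with invented numbers, not the app’s actual code): the timeline is discretised into small slots, each slot tracks the volume already in use, and each word tries its preferred settings before lowering its volume, then raising its speed:

```python
STEP = 0.01        # seconds per timeline slot
CLOUD_LEN = 5.0    # maximum cloud duration in seconds
MAX_TOTAL = 1.0    # maximum summed volume at any instant
MIN_VOLUME = 0.1   # per-word minimum volume

def place(words, max_speedup=1.5):
    """words: (text, duration_seconds, preferred_volume) tuples, most
    frequent (loudest) first. Returns (text, start_time, speedup, volume)
    for each word that fits; volume is lowered before speed is raised."""
    used = [0.0] * int(CLOUD_LEN / STEP)
    placements = []
    for text, duration, volume in words:
        for speedup in (1.0, max_speedup):
            n = max(1, int(duration / speedup / STEP))   # slots needed
            spot = next((s for s in range(len(used) - n + 1)
                         if MAX_TOTAL - max(used[s:s + n]) >= MIN_VOLUME),
                        None)
            if spot is not None:
                free = MAX_TOTAL - max(used[spot:spot + n])
                v = min(volume, free)                    # drop volume to fit
                for i in range(spot, spot + n):
                    used[i] += v
                placements.append((text, spot * STEP, speedup, v))
                break
    return placements
```

Because there’s effectively one dimension to search, it’s cheap to work out exactly how much volume (or speed) to trade away to fill a given gap.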

A visual representation of an audio word cloud of this post, with a hubbub factor that could allow two additional words to be spoken at the same time as the others.

I experimented with different minimum and maximum word volumes, and maximum total volumes, which all affected how many voices might speak at once (the ‘hubbub level’, as I call it). Quite late in the game, I realised I could have some voices in the right ear and some in the left, which makes it easier to distinguish them. In theory, each word could be coming from a random location around the listener, but I kept to left and right — in fact, I generated separate left and right tracks and adjusted the panning in Final Cut Pro. Rather than changing the logic to have two separate channels to search for audio space in, I simply made my app alternate between left and right when creating the final tracks. By doing this, I could increase the total hubbub level while keeping many of the words understandable. However, the longer it went on for, the more taxing it was to listen to, so I decided to keep the hubbub level fairly low.

The algorithm is deterministic, but since voices are chosen randomly, and different voices take different amounts of time to speak the same words even at the same number of words per minute, the audio word clouds created from the same text can differ considerably. Once I’d decided on the hubbub level, I got my app to create a random one for each month, then regenerated any where I thought certain words were too difficult to understand.

Capitalisation

The visual word cloud from December 2019, with both ‘Competition’ and the lowercase ‘competition’ featured prominently

In my visual word clouds, I kept the algorithm case-sensitive, so that a word with the same spelling but different capitalisation would be counted as a separate word, and displayed twice. There are arguments for keeping it like this, and arguments to collapse capitalisations into the same word — but which capitalisation of it? My main reason for keeping the case-sensitivity was so that the word cloud of Joey singing the entries to our MathsJam Competition Competition competition would have the word ‘competition’ in it twice.

Sometimes these really are separate words with different meanings (e.g. US and us, apple and Apple, polish and Polish, together and ToGetHer) and sometimes they’re not. Sometimes these two words with different meanings are pronounced the same way, other times they’re not. But at least in a visual word cloud, the viewer always has a way of understanding why the same word appears twice. For the audio word cloud, I decided to treat different capitalisations as the same word, but as I’ve mentioned, capitalisation does matter in the pronunciation, so I needed to be careful about which capitalisation of each word to send to the text-to-speech engine. Most voices pronounce ‘JoCo’ (short for Jonathan Coulton, pronounced with the same vowels as ‘go-go’) correctly, but would pronounce ‘joco’ or ‘Joco’ as ‘jocko’, with a different vowel in the first syllable. I ended up counting any words with non-initial capitals (e.g. JoCo, US) as separate words, but treating title-case words (with only the initial letter capitalised) as the same as all-lowercase, and pronouncing them in title-case so I wouldn’t risk mispronouncing names.
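That last rule fits in a few lines; here’s a sketch of the logic (not the app’s actual Swift code):

```python
def group_key(word):
    """Counting key: words with capitals after the first letter (JoCo, US)
    stay distinct; title-case and lowercase forms collapse into one."""
    return word if any(c.isupper() for c in word[1:]) else word.lower()

def spoken_form(word):
    """What goes to text-to-speech: collapsed groups are spoken in
    title-case so names aren't mispronounced; the rest stay as written."""
    if any(c.isupper() for c in word[1:]):
        return word
    return word[:1].upper() + word[1:].lower()

# group_key("Competition") == group_key("competition") == "competition"
# group_key("JoCo") stays "JoCo" — lowercasing it would change the vowel
```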

Further work

A really smart version of this would get the pronunciation of each word in context (the same way my rhyming dictionary rhyme.science finds rhymes for the different pronunciations of homographs, e.g. bow), group them by how they were pronounced, and make a word cloud of words grouped entirely by pronunciation rather than spelling, so ‘polish’ and ‘Polish’ would appear separately but there would be no danger of, say, ‘rain’ and ‘reign’ both appearing in the audio word cloud and sounding like duplicates. However, which words are actually pronounced the same depends on the accent (e.g. whether ‘cot’ and ‘caught’ sound the same) and on the text normalisation of the voice — you might have noticed that some of the audio word clouds in the trailer have ‘aye-aye’ while others have ‘two’ for the Roman numeral ‘II’.

Similarly, a really smart visual word cloud would use natural language processing to separate out different meanings of homographs (e.g. bow🎀, bow🏹, bow🚢, and bow🙇🏻‍♀️) and display them in some way that made it obvious which was which, e.g. by using different symbols, fonts, styles, colours for different parts of speech. It could also recognise names and keep multi-word names together, count words with the same lemma as the same, and cluster words by semantic similarity, thus putting ‘Zoe Keating’ near ‘cello’, and ‘Zoe Gray’ near ‘Brian Gray’ and far away from ‘Blue’. Perhaps I’ll work on that next.

A visual word cloud of this blog post about audio word clouds, superimposed on a visual representation of an audio word cloud of this blog post about audio word clouds.

I’ve recently been updated to a new WordPress editor whose ‘preview’ function gives a ‘page not found’ error, so I’m just going to publish this and hope it looks okay. If you’re here early enough to see that it doesn’t, thanks for being so enthusiastic!


How to fit 301032 words into nine minutes


A few months ago I wrote an app to download my YouTube metadata, and I blogged some statistics about it and some haiku I found in my video titles and descriptions. I also created a few word clouds from the titles and descriptions. In that post, I said:

Next perhaps I’ll make word clouds of my YouTube descriptions from various time periods, to show what I was uploading at the time. […] Eventually, some of the content I create from my YouTube metadata will make it into a YouTube video of its own — perhaps finally a real channel trailer. 

Me, two and a third months ago

TL;DR: I made a channel trailer of audiovisual word clouds showing each month of uploads:

It seemed like the only way to do justice to the number and variety of videos I’ve uploaded over the past thirteen years. My channel doesn’t exactly have a content strategy. This is best watched on a large screen with stereo sound, but there is no way you will catch everything anyway. Prepare to be overwhelmed.

Now for the ‘too long; don’t feel obliged to read’ part on how I did it. I’ve uploaded videos in 107 distinct months, so creating a word cloud for each month using wordclouds.com seemed tedious and slow. I looked into web APIs for creating word clouds automatically, and added the code to my app to call them, but then I realised I’d have to sign up for an account, including a payment method, and once I ran out of free word clouds I’d be paying a couple of cents each. That could easily add up to $5 or more if I wanted to try different settings! So obviously I would need to spend many hours programming to avoid that expense.

I have a well-deserved reputation for being something of a gadget freak, and am rarely happier than when spending an entire day programming my computer to perform automatically a task that it would otherwise take me a good ten seconds to do by hand. Ten seconds, I tell myself, is ten seconds. Time is valuable and ten seconds’ worth of it is well worth the investment of a day’s happy activity working out a way of saving it.

Douglas Adams in ‘Last chance to see…’

I searched for free word cloud code in Swift, downloaded the first one I found, and then it was a simple matter of changing it to work on macOS instead of iOS, fixing some alignment issues, getting it to create an image instead of arranging text labels, adding some code to count word frequencies and exclude common English words, giving it colour schemes, background images, and the ability to show smaller words inside characters of other words, getting it to work in 1116 different fonts, export a copy of the cloud to disk at various points during the progress, and also create a straightforward text rendering using the same colour scheme as a word cloud for the intro… before I knew it, I had an app that would automatically create a word cloud from the titles and descriptions of each month’s public uploads, shown over the thumbnail of the most-viewed video from that month, in colour schemes chosen randomly from the ones I’d created in the app, and a different font for each month. I’m not going to submit a pull request; the code is essentially unrecognisable now.
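The frequency-counting step, at least, is simple. Here’s a sketch of the idea (in Python rather than the app’s Swift, with a deliberately abridged stopword list):

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it"}  # abridged

def word_frequencies(text):
    """Count words for cloud sizing, skipping common English words.
    Counts are case-sensitive, as in the visual word clouds."""
    words = re.findall(r"[A-Za-z']+", text)
    return Counter(w for w in words if w.lower() not in STOPWORDS)

freqs = word_frequencies("The song of the month is a song about songs")
# freqs["song"] == 2; "the"/"The" are excluded entirely
```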

In case any of the thumbnails spark your curiosity, or you just think the trailer was too short and you’d rather watch 107 full videos to get an idea of my channel, here is a playlist of all the videos whose thumbnails are shown in this video:

It’s a mixture of super-popular videos and videos which didn’t have much competition in a given month.

Of course, I needed a soundtrack for my trailer. Music wouldn’t do, because that would reduce my channel trailer to a mere song for anyone who couldn’t see it well. So I wrote some code to make an audio version of each word cloud (or however much of it could fit into five seconds without too many overlapping voices) using the many text-to-speech voices in macOS, with the most common words being spoken louder. I’ll write a separate post about that; I started writing it up here and it got too long.

The handwritten thank you notes at the end were mostly from members of the JoCo Cruise postcard trading club, although one came with a pandemic care package from my current employer. I have regaled people there with various ridiculous stories about my life, and shown them my channel. You’re all most welcome; it’s been fun rewatching the concert videos myself while preparing to upload, and it’s always great to know other people enjoy them too.

I put all the images and sounds together into a video using Final Cut Pro 10.4.8. This was all done on my mid-2014 Retina 15-inch MacBook Pro, Sneuf.


Unintentional Haiku in my YouTube Video Descriptions


Since I wrote a little app to download much of my YouTube metadata, it was obvious that I needed to feed it through another little app I wrote: Haiku Detector. So I did. In all of my public YouTube descriptions put together, with URLs removed, there are 26 172 sentences, and 436 detected haiku.

As is usually the case, a few of these ‘haiku’ were not really haiku because the Mac speech synthesis pronounces them wrong, and thus Haiku Detector counts their syllables incorrectly. A few more involved sentences which no longer made sense because their URLs had been removed, or which were partial sentences from song lyrics which looked like full sentences because they were on lines of their own. Most of the rest just weren’t very interesting.
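Haiku Detector leans on the speech synthesiser for its syllable counts; a rougher, self-contained approximation is a vowel-group heuristic, which (just like the synthesiser, in its own way) miscounts some words:

```python
import re

def syllables(word):
    """Rough count of vowel groups, dropping a final silent 'e'.
    (Haiku Detector asks the Mac speech synthesiser instead.)"""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)

def is_haiku(lines):
    """True if three lines follow the 5-7-5 syllable pattern."""
    counts = [sum(syllables(w) for w in re.findall(r"[a-z']+", line.lower()))
              for line in lines]
    return counts == [5, 7, 5]

is_haiku(["But hey, if I'd brought",
          "my external microphone,",
          "it would have got wet."])   # → True with this heuristic
```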

There were quite a lot of song lyrics which fit into haiku, which suggest tunes to which other haiku can be sung, if the stress patterns match up. I’m not going to put those here though; there are too many, and I could make a separate post about haiku in Jonathan Coulton lyrics, having already compiled a JoCorpus for rhyme.science to find rhymes in. So here are some other categories of haiku I liked. For lack of a better idea, I’ll link the first word of each one to the video it’s from.

Apologies about my camerawork

Also, there’s a lot
of background noise so the sound
isn’t very good.

There was a little
too much light and sound for my
poor little camera. 🙂

But hey, if I’d brought
my external microphone,
it would have got wet.

I’m so sad that I
had to change batteries or
something part-way through. 😦

Who do I look like,
Joe Covenant in Glasgow
in 2008?

Now the guitar is
out of tune and my camera
is out of focus.

Performers being their typical selves

John Roderick:

Eventually
they get around to singing
the song Cinnamon.

Aimee Mann asks John
Roderick to play one of
his songs (which he wrote.)

Jim Boggia:

But first, he gives us
a taste of what he’s really
famous for: tuning.

And now he’s lost his
voice, so it’s going to be
great for everything.

Cody Wymore:

Cody Wymore can’t
do a set without Stephen
Sondheim in it.

Cody horns in on
it anyway by adding
a piano part.

He pauses time for
a bit so nobody knows
he was unprepared.

It’s about being
in a room full of people
and feeling alone.

Paul and Storm:

Why does every new
verse of their song keep taking
them so goddamn long?

Little did I know
that four other people would
throw panties at Paul.

Ted Leo:

We’re gonna bring the
mood down a little bit, but
maybe lift it up!

Nerf Herder:

Meanwhile, they have to
fix up the drums because I
guess they rocked too hard.

Zoe and Brian Gray:

It’s For the Glory
of Gleeble Glorp, which isn’t
a euphemism.

Zoe Gray has to
follow Brian Gray’s songs from
the Gleebleverse.

Clint McElroy:

He’s here to perform
for us an amazing act
of léger de main.

Travis McElroy:

Travis gets up on
stage and holds a small doll’s head
in a creepy way.

which brings us to Jonathan Coulton:

He loves us and is
very glad to be with us.
This is Creepy Doll.

Jonathan Coulton
remarks on the lax rhyming
in God Save The Queen.

Jonathan will use
Jim’s capo, and he will give
it back afterwards.

Jonathan did not
know this was going to be
a cardio set.

That guy Paul has been
seeing every goddamned day
for the last two months.

MC Frontalot:

MC Frontalot
talks about samples and tells
us what hiphop is.

Jean Grae:

It’s not because she’s
a lady, but because she’s
an alcoholic.

She feels like she should
get a guitar case, even
without a guitar.

Jon Spurney:

Jon Spurney rocks out
on the guitar solo, as
he is wont to do.

Me:

Eventually,
at about 6:38,
we get to the point.

The ship’s IT guy:

He has been very
glad to meet us, but he’s not
sad to see us leave.

Red Team Leader:

Red Leader has some
announcements to make before
the final concert.

The Red Team didn’t
mind, because we’re the team that
entertains ourselves.

All the JoCo Cruise performers in the second half of the last show:

Let’s bring Aimee Mann
back out to the stage to join
the Shitty Bar Band.

We now get into
the unrehearsed supergroup
section of the show.

JoCo Cruise hijinks

This is the last show,
unless we’re quarantined on
the ship for a while!

Half of those palettes
were 55-gallon drums
of caveat sauce.

This pun somehow leads
to a sad Happy Birthday
for Paul Sabourin.

Paul Sabourin points
out Kendra’s Glow Cloud dress in
the front row (all hail!)

They talk about why
they did note-for-note covers
instead of new takes.

Make It With You by
Bread, which has even better
string writing than Swift.

So by Friday night,
they’d written this musical
about JoCo Cruise.

A plan to take over the world:

Here’s how it’s going
to work: first we’re going to
have a nice dinner.

And once we have our
very own cruise ship, we shall
dominate the seas.

Some Truth:

An actual cake
which is not a lie. It was
delicious and moist.

It was delicious
and moist. This is Drew’s body
given up for us.

Questions and answers:

What do you do when
you reach the limits of your
own understanding?

When she reaches the
limits of her knowledge, she
says she doesn’t know.

the green people with
buttons who are aliens
wanting to probe you

Wash your hands! Do you
need to take your life jackets
to the safety drill?

What about water,
though? Where do you sign up for
the specialty lunch?

Calls to action

All this and more can
be real if you book yourself
a berth on that boat.

It was supported
by her Patreon patrons.
You could be one too!

If you want to hear
him sing more covers this way,
back this Kickstarter:

That will do for now. Next perhaps I’ll make word clouds of my YouTube descriptions from various time periods, to show what I was uploading at the time. Or perhaps I’ll feed the descriptions into the app I wrote to create the data for rhyme.science, see what the most common rhymes are, and write a poem about them, as I did with Last Chance to See.

Eventually, some of the content I create from my YouTube metadata will make it into a YouTube video of its own — perhaps finally a real channel trailer. But what will I write in the description and title, and will I have to calculate the steady state of a Markov chain to make sure it doesn’t affect the data it shows?

 


Some Statistics About My Ridiculous YouTube Channel


I’ve developed a bit of a habit of recording entire concerts of musicians who don’t mind their concerts being recorded, splitting them into individual songs, and uploading them to my YouTube channel with copious notes in the video descriptions. My first upload was, appropriately, the band featured in the first image on the web, Les Horribles Cernettes, singing Big Bang. I first got enough camera batteries and SD cards to record entire concerts for the K’s Choice comeback concert in Dranouter in 2009, though the playlist is short, so perhaps I didn’t actually record that entire show.

I’ve also developed a habit of going on a week-long cruise packed with about 25 days of entertainment every year, and recording 30 or so hours of that entertainment. So my YouTube channel is getting a bit ridiculous. I currently have 2723 publicly-visible videos on my channel, and 2906 total videos — the other 183 are private or unlisted, either because they’re open mic or karaoke performances from JoCo Cruise and I’m not sure I have the performer’s permission to post them, or they’re official performances that we were requested to only share with people that were there.

I’ve been wondering just how much I’ve written in my sometimes-overly-verbose video descriptions over the years, and the only way I found to download all that metadata was using the YouTube API. I tested it out by putting a URL with the right parameters in a web browser, but it’s only possible to get the data for up to 50 videos at a time, so it was clear I’d have to write some code to do it.
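The 50-video limit means paging through results with the API’s `nextPageToken` until it stops appearing. As a sketch of that loop (in Python rather than the Swift the app was actually written in, and with the network call injected so nothing here touches the real API):

```python
# The YouTube Data API v3 returns at most 50 results per request, plus a
# nextPageToken when more pages remain. fetch_page stands in for one HTTPS
# call to e.g. the playlistItems endpoint with part=snippet&maxResults=50.
def fetch_all(fetch_page):
    items, token = [], None
    while True:
        page = fetch_page(token)           # one API call, up to 50 items
        items.extend(page["items"])
        token = page.get("nextPageToken")  # absent on the last page
        if token is None:
            return items
```

A real `fetch_page` would request something like `https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&maxResults=50&playlistId=…&key=…&pageToken=…` and decode the JSON response.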

Late Friday evening, after uploading my last video from JoCo Cruise 2020, I set to writing a document-based CoreData SwiftUI app to download all that data. I know my way around CoreData and downloading and parsing JSON in Swift, but haven’t had many chances to try out SwiftUI, so this was a way I could quickly get the information I wanted while still learning something. I decided to only get the public videos, since that doesn’t need authentication (indeed, I had already tried it in a web browser), so it’s a bit simpler.

By about 3 a.m., I had all the data, stored in a document and displayed rather simply in my app. Perhaps that was my cue to go to bed, but I was too curious. So I quickly added some code to export all the video descriptions in one text file and all the video titles in another. I had planned to count the words within the app (using enumerateSubstrings byWords or enumerateTags, of course… we’re not savages! As a linguist I know that counting words is more complicated than counting spaces.) but it was getting late and I knew I wanted the full text for other things, so I just exported the text and opened it in Pages. The verdict:

  • 2723 public videos
  • 33 465 words in video titles
  • 303 839 words in video descriptions
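The enumerateSubstrings point, that counting words is more than counting spaces, translates to any language. A rough Python stand-in (the tokenising regex here is my own approximation, nowhere near as clever as Foundation’s word enumeration):

```python
import re

# Count word tokens rather than space-separated chunks, so runs of spaces,
# stray dashes, and trailing punctuation don't inflate the total.
WORD = re.compile(r"[A-Za-z0-9'’]+(?:-[A-Za-z0-9'’]+)*")

def word_count(text):
    return len(WORD.findall(text))
```

For example, `word_count("Don’t count   spaces - count words!")` gives 5, whereas a naive `.split()` gives 6 by counting the bare dash as a word.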

The next day, I wanted to create some word clouds with the data, but all the URLs in the video descriptions got in the way. I quite often link to the playlists each video is in, related videos, and where to purchase the songs being played. I added some code to remove links (using stringByReplacingMatches with an NSDataDetector with the link type, because we’re not savages! As an internet person I know that links are more complicated than any regex I’d write.) I found that Pages counts URLs as having quite a few words, so the final count is:

  • At least 4 633 links (this is just by searching for ‘http’ in the original video descriptions, like a savage, so might not match every link)
  • 267 567 words in video descriptions, once links are removed. I could almost win NaNoWriMo with the links from my video descriptions alone.
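NSDataDetector has no Python stdlib twin, so this sketch of the link-stripping step falls back on a deliberately simple URL pattern: enough to show the idea, but it will miss links a real data detector would catch.

```python
import re

# Stand-in for NSDataDetector's link type: match anything that looks like
# an http(s) URL, delete it, then tidy the doubled spaces left behind.
URL = re.compile(r"https?://\S+")

def strip_links(description):
    stripped = URL.sub("", description)
    return re.sub(r" {2,}", " ", stripped).strip()
```

Running the word count on the stripped text rather than the original is what drops the total from Pages’ URL-inflated figure to the real one.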

I then had my app export the publish dates of all the videos, imported them into Numbers, and created the histogram shown above. I actually learnt quite a bit about Numbers in the process, so that’s a bonus. I’ll probably do a deeper dive into the upload frequency later, with word clouds broken down by time period to show what I was uploading at any given time, but for now, here are some facts:

  • The single day when I uploaded the most publicly-visible videos was 25 December 2017, when I uploaded 34 videos — a K’s Choice concert and a Burning Hell concert in Vienna earlier that year. I’m guessing I didn’t have company for Christmas, so I just got to hang out at home watching concerts and eating inexpertly-roasted potatoes.
  • The month when I uploaded the most publicly-visible videos was April 2019. This makes sense, as I was unemployed at the time, and got back from JoCo Cruise on March 26.

So, onto the word clouds I cleaned up that data to make. I created them on wordclouds.com, because wordle has rather stagnated. Most of my video titles mention the artist name and concert venue and date, so some words end up being extremely common. This huge variation in word frequency meant I had to reduce the size from 0 all the way to -79 in order for it to be able to fit common words such as ‘Jonathan’. Wordclouds lets you choose the shape of the final word cloud, but at that scale, it ends up as the intersection of a diamond with the chosen shape, so the shape doesn’t end up being recognisable. Here it is, then, as a diamond:

titles

The video descriptions didn’t have as much variation between word frequencies, so I only had to reduce it to size -45 to fit both ‘Jonathan’ and ‘Coulton’ in it. I still don’t know whether there are other common words that didn’t fit, because the site doesn’t show that information until it’s finished, and there are so many different words that it’s still busy drawing the word cloud. Luckily I could download an image of it before that finished. Anyway, at size -45, the ‘camera’ shape I’d hoped to use isn’t quite recognisable, but I did manage a decent ‘YouTube play button’ word cloud:

descriptions

One weird fact I noticed is that I mention Paul Sabourin of Paul and Storm in video descriptions about 40% more often than I mention Storm DiCostanzo, and I include his last name three times as much. To rectify this, I wrote a song mentioning Storm’s last name a lot, to be sung to the tune of ‘Hallelujah’, because that’s what we do:

We’d like to sing of Paul and Storm.
It’s Paul we love to see perform.
The other member’s name’s the one that scans though.
So here’s to he who plays guitar;
let’s all sing out a thankful ‘Arrr!’
for Paul and Storm’s own Greg “Storm” DiCostanzo!
DiCostanzo, DiCostanzo, DiCostanzo, DiCostanzo

I’m sure I’ll download more data from the API, do some more analysis, and mine the text for haiku (if Haiku Detector even still runs — it’s been a while since I touched it!) later, but that’s enough for now!


Three more Hallelujahs


You might have noticed that Joey and I have been writing original songs and new versions of existing songs set to the tune of Leonard Cohen’s Hallelujah. Here’s a playlist of 24 Hallelujah videos we’ve recorded so far (including one of Joey singing part of the original in a choir.) We have many more lyrics waiting to be sung. We started writing these after getting the song stuck in our heads from hearing Beth Kinderman’s ‘Stop Covering Hallelujah’ at MarsCon 2019. The day after that MarsCon we went to the biggest ball of twine in Minnesota, in formalwear, because it’s a ball.

Byron wearing a black hat, black jacket with white shirt and red tie, and khaki pants, me wearing a long black dress and a tiara, and Joey wearing a black suit with a white shirt, all standing in front of a giant twine ball, seen through the glass of a pagoda. There is much snow on the ground.

While talking to our hitchhiker ‘Bernie’ (actually Byron) back at the MarsCon hotel, we realised that ‘Minnesota’ scans to ‘Hallelujah’, so I decided to write a Hallelujah version of Weird Al’s song, The Biggest Ball of Twine in Minnesota. I did so a few days after JoCo Cruise 2019 ended.

At MarsCon 2020, we found ourselves again in the song circle at Beth’s Space Oddity room party, so I convinced Joey to sing the Biggest Ball of Twine Hallelujah, but then I was unexpectedly recruited to sing a verse, which I think I did terribly, and then we skipped the last few. Here’s that performance:

And here are the full lyrics:

I had two weeks vacation due
From Big Roy’s Heating, Pipes and Flue
Asked kids at dinner where they’d like to go to
They made their choice as noodles twirled
Of anywhere in this great big world
The biggest ball of twine in Minnesota
Minnesota, Minnesota, Minnesota, Minnesota

Next day we loaded up the car
With wieners, taters, rhubarb pie
And rolled out in our 53 DeSoto
Picked up a guy as children fussed
His sign had said “Twine ball or bust”
The biggest ball of twine in Minnesota
Minnesota, Minnesota, Minnesota, Minnesota

We could not wait to see the twine
We only stopped when we were buyin’
More wieners and a diet chocolate soda
We sang for the 27th time that day
When we saw a sign that showed the way
To the biggest ball of twine in Minnesota
Minnesota, Minnesota, Minnesota, Minnesota

As sun was setting in the sky
Before our unbelieving eyes
A shrine beneath a makeshift twine pagoda
To see that huge majestic sphere
I had to pop myself a beer
the biggest ball of twine in Minnesota
Minnesota, Minnesota, Minnesota, Minnesota

Just who’s he trying to impress
There’s no bridge guiding to a guess
O, Twine Ball Man it seems we hardly knew ya
It’s a strange and what-on-earthly thing
Some twenty one thousand pounds of string
It’s a twisted and a ballsy hallelujah
hardly knew ya, Hallelujah, hardly knew ya, hallelujah.

I wept with joy before the ball
I bet if we unrolled it all
It’d reach right out to Fargo, North Dakota
“That’s what our country’s all about”
But then the henchmen threw us out
Of the biggest ball of twine in Minnesota
Minnesota, Minnesota, Minnesota, Minnesota

We slept a night at Twine Ball Inn
Next morning, headed home again
But I can’t think where else I’d rather go to
We didn’t want to leave; that’s clear
I think that we’ll be back next year
At the biggest ball of twine in Minnesota
Minnesota, Minnesota, Minnesota, Minnesota

When Beth Kinderman played her song in concert later at MarsCon, she flattered Joey and me with a special dispensation to continue singing Hallelujah.

A few days after I got back home, it was Joey’s birthday, so I sang a birthday Hallelujah I’d been planning ever since my own birthday. I used Joey’s Sore Throat Hallelujah as a backing track, simply by playing it on my iPad while I sang. I think I did a better job on this one, but still felt pretty uncomfortable with the high notes:

Lyrics:

Today’s the day we celebrate
recurrence of a great first date;
it’s Joey-left-the-womb-and-came-to-Earth day
and made it better than before;
I hope you’ll stay for many more,
so I can keep on singing happy birthday.

Now, four days into JoCo Cruise, COVID-19 was declared to be a pandemic, so by the time I got home, social distancing, quarantine, and self-isolation were the hot new thing. I got enough groceries to survive and then stayed strictly inside my apartment for 14 days to make sure I hadn’t picked anything up on the cruise or in the four airports I travelled through afterward.

I also wrote lyrics for an ‘isolation’ Hallelujah. But Joey had seen my birthday Hallelujah, and somehow become convinced that I could sing Hallelujahs all by myself. So we worked out a key I was more comfortable singing it in (A, in particular) and instead of singing it for me, Joey sent a backing track in that key and got me to do it myself. I happened to record it while still in costume from an online open mic I’d participated in, so at least nobody will know it was me if I sang badly.

Lyrics:

It follows a logistic curve.
It’s serious, and we observe
a median of five-day incubation,
so even if you’re symptom-free,
and so are all the folks you see,
please stay home if you can in isolation.
Isolation, isolation, isolation, isolation.

Since then, I’ve been uploading more videos from JoCo Cruise — I’ve just about finished uploading the entire land concert at Santo Domingo. I performed a few other things on the cruise (and one other song at MarsCon) but I’ll post about them when all the relevant videos are up.


May the Fourth Be With You


I’ve published both of these things before, but not both on May the Fourth. Here’s a video of the poem that I wrote about Star Wars before I saw it, along with a wrap-up of what I thought about the poem after seeing Star Wars:

And here’s a musical version of that poem, set to music and sung by Joey Marianer:

I’ve just noticed that the automatically-generated closed captions on that one say ‘sorry Bingley Lloyd’ instead of ‘stars were being made’, which is hilarious, but if you’re hard of hearing you’d be better off reading the text of the poem here instead. I don’t think I’ve added proper closed captions to my video of it either yet, sorry; I should have thought about this before today.

May the force be with Peter Mayhew always.

