So the other day Phoenix and I were working on a project and I noticed that you could split comments into multiple parts using cut and paste. He mentioned that you might be able to make an animation using that, so I decided to use it to play the Bad Apple!! animation.

The contents of the doc don't change at all while the video is actually playing - it's just which comment is highlighted that's changing. You can give the document a try for yourself - but be warned, it takes ages to load:
The Google Doc.

A bit of context, I guess - Bad Apple!! is a shadow art animation featuring characters and music from the Touhou series. Similarly to Doom, it's become a quasi-meme for people to try to get it running on anything they can, as the shadow art is easily recognizable even on low-resolution monochrome makeshift displays. This isn't the first time it's been featured in a project on Cemetech - Iambian's CEvidium was created to play it, and womp has made it for Noteman.

When I started this project, I thought it would be as simple as opening the Apps Script editor and calling a few functions to add comments. But apparently, Apps Script can't interact with comments at all, and neither can the Docs API. The Drive API looked promising, as it had a function for adding comments, but it doesn't allow you to create an anchor to connect it to the text, so that was also a no-go.

The next thing I tried was writing an Apps Script program that would take all of the characters for a given frame, move them to a single line, select them so a macro running on my machine could comment on them, and then move them back into place. This worked fine, except that it took over 20 seconds per frame, and Apps Script has a quota of 90 minutes of runtime per day. At full framerate, converting the entire animation would have taken over a month of daily quota, so I gave up on that.

Then I realized that in order for text with comments to be copyable from one place in the doc to another, some information about the comments had to be stored on the clipboard. So, I took a look at the clipboard data, and it turned out to be a doubly-JSON-encoded string. I wrote a tool that allows you to view and modify the clipboard data.
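To be concrete about what "doubly-JSON-encoded" means here, this is a minimal sketch (my reading of the format, not Google's documented API): the outer layer is a JSON string literal whose value is itself a JSON document, so you decode twice to get at the data and encode twice to put it back.

```python
import json

def decode_clipboard(raw: str):
    """Decode a doubly-JSON-encoded payload: the outer layer is a
    JSON string whose contents are themselves JSON."""
    return json.loads(json.loads(raw))

def encode_clipboard(data) -> str:
    """Re-encode data the same way, for writing back to the clipboard."""
    return json.dumps(json.dumps(data))
```

Round-tripping through both layers, `decode_clipboard(encode_clipboard(x))` gives back `x`.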

Comment data is stored as an array of objects, one per character, with a format like
  "cs_cids": {
    "cv": {
      "op": "set",
      "opValue": ["kix.gwil2hwplzg9"]
    }
  }
or null, if unchanged from the previous character. This seems somewhat wasteful, as this would be the perfect opportunity to use a run-length encoding. Instead, each character you add to the copied string grows the data by five bytes (a literal null plus a comma).
Anyways, kix.gwil2hwplzg9 is the ID for the anchor for the comment. "kix" seems to be the internal name for Google Docs - tons of stuff is namespaced with that. There's no data about the contents of the comment itself, only the anchor.
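To illustrate the per-character layout described above, here's a sketch of how you'd build that array yourself. The input format (a plain list of anchor IDs, one per character) is my own invention for the example; the output shape follows the structure shown above.

```python
def build_cids_array(char_anchors):
    """Build the per-character comment array: an object carrying the
    anchor ID where the anchor changes from the previous character,
    and None (serialized as the 5-byte "null,") where it repeats.

    char_anchors: list of anchor ID strings, one per character.
    """
    out = []
    prev = object()  # sentinel so the first character always emits an object
    for anchor in char_anchors:
        if anchor == prev:
            out.append(None)  # unchanged from previous character
        else:
            out.append({"cs_cids": {"cv": {"op": "set",
                                           "opValue": [anchor]}}})
        prev = anchor
    return out
```

A run of characters under one comment comes out as one object followed by a string of nulls, which is where the five-bytes-per-character overhead comes from.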

I tried replacing the anchor ID in the clipboard data with one from another document, and it wasn't preserved when I copied the text again. I also tried adding a comment that used the same invalid anchor using the Drive API and pasting the text again. The comment itself kept the invalid anchor ID, but the text did not. So, I'm guessing that the ID only serves as a reference to an anchor, rather than directly acting as one.

So, I still needed one anchor per frame, but couldn't find an automatic way of generating them. Instead, I set up a macro to hit Ctrl+Alt+M over and over, adding a comment each time, and used an increasing sequence of numbers for the comment text. This worked fine in my initial tests, so I made a converter that takes a sequence of comments and a raw video file in ffmpeg's monob pixel format, and converts them into an animation.
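For reference, decoding the raw video half of that input could look something like this. This is my own sketch, not the converter from the repo: monob is 1 bit per pixel with each row padded to a whole byte, bits packed MSB-first; as I understand the ffmpeg docs, a 0 bit is black and a 1 bit is white, but that's worth double-checking.

```python
def read_monob_frames(raw: bytes, width: int, height: int):
    """Split raw monob video into frames of booleans (True = white).

    monob: 1 bit per pixel, rows padded to a byte boundary,
    most significant bit first within each byte.
    """
    row_bytes = (width + 7) // 8
    frame_bytes = row_bytes * height
    frames = []
    for off in range(0, len(raw) - frame_bytes + 1, frame_bytes):
        frame = []
        for y in range(height):
            row = []
            for x in range(width):
                byte = raw[off + y * row_bytes + x // 8]
                bit = (byte >> (7 - x % 8)) & 1  # MSB-first bit order
                row.append(bit == 1)
            frame.append(row)
        frames.append(frame)
    return frames
```

From there, each frame maps straightforwardly onto a grid of characters, one comment anchor per frame.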

This didn't really scale, though - after about 1500 comments, the interface would occasionally hang for several seconds, resulting in characters with more than one comment, or a number getting inserted as text rather than as a comment. I tried cranking up the delay, but this never really worked.

At that point, I basically gave up on having consecutive comment numbers, and just used the Drive API to nab a list of anchors in the document. I had around 4000 of them - enough for about 2/3 of the video at 30 FPS. I ran them through the converter and pasted the result into the doc, and it worked! The only issue was that the sync timed out, so I was unable to actually upload the document.
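Pulling the anchors out of the Drive API's `comments.list` responses is just a matter of walking the pages. A sketch of that step, assuming the v3 response shape (a "comments" array per page, each comment carrying an "anchor" field when it's attached to text):

```python
def extract_anchors(pages):
    """Collect anchor IDs from Drive v3 comments.list response pages.

    pages: iterable of response dicts, each shaped like
    {"comments": [{"id": ..., "anchor": ...}, ...]}.
    Comments without an anchor (unanchored comments) are skipped.
    """
    anchors = []
    for page in pages:
        for comment in page.get("comments", []):
            anchor = comment.get("anchor")
            if anchor:
                anchors.append(anchor)
    return anchors
```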

I decided to dial it down to 10 FPS, which only needed 2000 comments. It did actually upload this time, but it was still extremely slow to load and use. It takes over a minute to load the document for me, the page uses half a gig of RAM, and it takes over a second to switch which comment is displayed.

But yeah, if you speed the video up it works fine. Thanks to fghsgh for recording the video for me, since it takes like two hours and makes the computer basically unusable in the meantime. I tried recording myself but OBS crashed for some reason.

Here's the repo with all the code I ended up actually using: