Using Docker to run Manim in Jupyter notebooks

If you want to develop Manim animations easily, a Jupyter notebook makes a great working environment. However, it can be fiddly to set up.

Luckily there is a pre-built Docker image that can get you up and running with two simple commands:

cd <your working directory>
docker run --rm -it -p 8888:8888 -v "%cd%:/manim" manimcommunity/manim jupyter lab --ip=0.0.0.0

In the console you will then see a URL looking something like http://127.0.0.1:8888/?token=xxxxxxx. Cut and paste this into your browser and you are away laughing.

The command above is for Windows. For Linux (or macOS) you will need to swap the %cd% bit for $(pwd), like so:

cd <your working directory>
docker run --rm -it -p 8888:8888 -v "$(pwd):/manim" manimcommunity/manim jupyter lab --ip=0.0.0.0

These commands mount the current directory into the docker container, so any notebooks or videos you create will be saved into that directory and won’t be lost when you stop the container.

Finally, to create an animation, put the following code into two separate cells in a new notebook:

from manim import *

and

%%manim SquareToCircle

class SquareToCircle(Scene):
    def construct(self):
        circle = Circle()  # create a circle
        circle.set_fill(PINK, opacity=0.5)  # set color and transparency

        square = Square()  # create a square
        square.rotate(PI / 4)  # rotate a certain amount

        self.play(Create(square))  # animate the creation of the square
        self.play(Transform(square, circle))  # interpolate the square into the circle
        self.play(FadeOut(square))  # fade out animation

When you run the notebook you will see the rendered animation (and it will be saved into a media sub-directory). The %%manim line is the magic instruction that tells Manim to render the specified scene.
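If you prefer working outside the notebook, the same scene can be rendered with the manim command-line tool instead. A quick sketch, assuming the class above has been saved into a file called scene.py:

```shell
# Render the SquareToCircle scene from scene.py at medium quality (-qm);
# as with the notebook magic, output lands in a media sub-directory.
manim -qm scene.py SquareToCircle
```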

Using Docker to run Jupyter notebooks

If you are messing with data, you can’t beat a Jupyter notebook as a working environment. Not only are they super easy to use, but they are an effective way of sharing your workings and findings.

Even better, the Jupyter project has created a bunch of Docker images that contain all the tooling you need, which makes getting up and running as simple as two commands:

cd <your working directory>
docker run --rm -it -p 8888:8888 -v %cd%:/home/jovyan/work jupyter/datascience-notebook

In the console you will then see a URL looking something like http://127.0.0.1:8888/?token=xxxxxxx. Cut and paste this into your browser and you are away laughing.

The command above is for Windows. For Linux (or macOS) you will need to swap the %cd% bit for $(pwd), like so:

cd <your working directory>
docker run --rm -it -p 8888:8888 -v $(pwd):/home/jovyan/work jupyter/datascience-notebook

If you need sudo to install something, you will need to run the container as root and set the GRANT_SUDO environment variable:

docker run --rm -it -p 8888:8888 -u root -e GRANT_SUDO="yes" -v %cd%:/home/jovyan/work jupyter/datascience-notebook

Lastly, the jupyter/datascience-notebook image generally has all you need, but if you are doing something specific, there are a bunch of other options, including ones with R, TensorFlow, SciPy and Apache Spark support. See https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html for more info.
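For example, running one of the other stacks just means swapping the image name; a sketch using the TensorFlow-flavoured image from the Jupyter Docker Stacks:

```shell
# Same pattern as before, but with the TensorFlow stack instead of
# the data-science one (use %cd% in place of $(pwd) on Windows).
docker run --rm -it -p 8888:8888 -v "$(pwd):/home/jovyan/work" jupyter/tensorflow-notebook
```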

Right-sizing User Stories

I was asked by a customer to give a talk about right-sizing user stories. Below is the summary I gave them. This is by no means an original bit of thinking, but rather ideas pulled from a number of sources (1, 2); however, it made a nice little summary, so I thought I’d post it.

Form

A User Story typically takes the form of:

As <type of user>
I want <some goal>
So that <some reason>

So what is the right size?

There is no right answer!

It pretty much depends on the team: in particular, their skill levels, the process they follow and their domain knowledge. All these factors affect the size of User Story that is right for the team.

A well-oiled team, working in a domain it knows inside and out, can consume and deliver much larger User Stories with ease. A newbie team with little domain knowledge is going to need much finer-grained (and easier to consume) User Stories.

Rules of thumb

However, there are some rules of thumb that can help you find the right size of User Story for your team:

  • small enough to be understood and implemented by the team in a short space of time
  • big enough to represent business value in its own right
  • big enough to be delivered on its own

A User Story is NOT…

  • A task (e.g. a small bit of work that has no standalone business value)
  • A requirement

Instead a User Story:

  • Groups a set of tasks to be done (which can be used for bottom-up estimation if need be)
  • Groups a set of requirements (ideally defined as acceptance criteria)

A User Story is “Done” when:

  • All the tasks have been completed
  • All the acceptance criteria have been met

Some examples

“As a user, I want a new system because the old one no longer meets my needs” is too big (probably even too big as an Epic)

“As a user, I want to register, login and manage my details online” is still too big. It should be at least three User Stories, covering logging in, registering and managing details.

For most teams even three would be too coarse. Logging in could itself be broken down into three finer-grained User Stories:

  • “As a user, I want to log in, so that I can access my private information”
  • “As a user, I want to reset my forgotten password, so that I can log in”
  • “As a user, I want the system to remember me, so that I don’t have to log in every time”

These are probably the size of User Story I’d suggest most teams use. Each delivers a standalone bit of business value, can be easily understood and implemented, and can be easily prioritized in a backlog.

If you start to create stories like “As a user, I want to enter my user ID” and “As a user, I want to push the login button”, then stop, you have gone too far!

Docker: Cleaning up after yourself

I’ve been doing a lot of work with Docker of late, mostly for creating easy dev and test environments. One of the problems you come across when doing this is that you end up with all sorts of orphaned images and containers, which chew up disk space and system resources.

The following commands will give you a clean slate, stopping all running containers, removing them, and deleting all images:

docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)

Obviously, take care when running these commands; you don’t want to nuke something important :)
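On newer Docker releases there is also a built-in clean-up command that does much the same job in one go. A sketch (check that your Docker version supports these flags):

```shell
# Stop everything first, as prune only removes stopped containers
docker stop $(docker ps -q)

# Remove all stopped containers, unused networks, dangling images and
# build cache. -a removes ALL unused images, --volumes unused volumes too.
docker system prune -a --volumes
```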

Google IO 2016 Highlights

Last month Google held its annual Google I/O conference. Not really a surprise, but it was chock-full of conversational and AI-based tech, which is all the buzz of late.

Three bits in particular stood out for me.

The first is the demo of Allo, their new chat app. The app itself wasn’t particularly groundbreaking, but the embedding of their new digital assistant tech (the Google Assistant) is very interesting, and it really shows where Google thinks things are going. The fact that third parties will be able to hook into the Google Assistant ecosystem, as shown with the OpenTable integration at the end of the clip, should have organizations interested in conversational commerce buzzing.

Here is the clip:

The second demo was for Google Home, their answer to the Amazon Echo. I suspect there was a bit of hand-waving going on in this video clip, but having done some work with the Echo, everything they demoed should be doable. The one thing that did not ring true for me was how it identified specific family members so that it could perform actions in the context of their data and “profile”:

The last thing that stood out for me was the new Android Instant Apps tech. It basically allows a sliver of your app to be downloaded and run, which means your users don’t have to install your app first. They demoed apps being launched from web searches and from an NFC tag scan. It really starts to blur the difference between an app and a web page. Here is the clip from the keynote:

All in all, some pretty interesting tech will be hitting us shortly.

How to create a private key for signing Android apps

To create a private key to sign your Android apps with, you need to run the keytool command (installed as part of a Java Development Kit) as follows:

keytool -genkey -v -keystore myandroid.keystore -alias myandroidkey -keyalg RSA -keysize 2048 -validity 10000 -dname "O=Acme Ltd"

Replace Acme Ltd with your company name.

When the tool runs, it will prompt you for a password and then generate a file called myandroid.keystore. You will need this file, your password, and the key’s alias (myandroidkey in this example) to sign your app.

To see the details of your key use the following command:

keytool -list -v -keystore myandroid.keystore

For more information, see the Signing Your Application page in the Android developer documentation.
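Once you have the keystore, actually signing an APK can be done with apksigner, which ships with the Android SDK build tools. A sketch using the key generated above (the APK file name here is just an example):

```shell
# Sign the APK using the keystore and alias created earlier;
# you will be prompted for the keystore password.
apksigner sign --ks myandroid.keystore --ks-key-alias myandroidkey app-release.apk

# Check that the signature is valid
apksigner verify --verbose app-release.apk
```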