For my first semester here at the Media Lab, I am taking the "notorious" How to Make (Almost) Anything class, and, as one can easily conjecture, the first step to making almost anything is almost always designing it.
And my methodology worked reasonably well: I was able to produce quite a few projects that both I and many others find interesting. So, if you're like me and prefer solving computational geometry problems to clicking and dragging in GUIs, or perhaps just enjoy trying out something new, I'd like to share a few tricks and workflows I've been using along the way.
Just like how realizing that the computer is a bunch of bytes you move around suddenly makes systems programming simple, realizing that a design is just a bunch of numbers has a similar effect on computationally generating one. When you use CAD software, the software generates the numbers for you; when you code, you generate the numbers yourself. Take the cube, for example. A cube has eight vertices, and each vertex has X, Y, Z coordinates. Just 24 numbers! Assuming the cube is centered at the origin and has a side length of 2, we can write out the vertices as:
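Those 24 numbers, written out in JavaScript:

```javascript
// The 8 vertices of a cube centered at the origin with side length 2:
// every combination of -1 and 1 on each axis. 8 x 3 = 24 numbers.
const cubeVertices = [
  [-1, -1, -1], [ 1, -1, -1], [ 1,  1, -1], [-1,  1, -1], // bottom face
  [-1, -1,  1], [ 1, -1,  1], [ 1,  1,  1], [-1,  1,  1], // top face
];
```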
When you place a cube primitive in CAD software, somewhere, in some format, these numbers are stored. But there is no reason we can't write these numbers ourselves. All the complicated stuff we'll see later resolves to this: finding out the numbers that describe the shape and writing them down.
Designing with code gives you a free bonus: everything is incredibly parametric. Just stick anything you might ever want to change in variables or arguments. Say, for the cube, you'd like the width, height and depth to be parametric:
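A minimal sketch of such a function (the optional center argument is my own addition, so pieces can be placed anywhere):

```javascript
// A parametric cube: width (x), height (y) and depth (z) are arguments.
// Returns the 8 vertices, optionally centered at [cx, cy, cz].
function cube(w, h, d, [cx, cy, cz] = [0, 0, 0]) {
  const vertices = [];
  for (const sz of [-1, 1])
    for (const sy of [-1, 1])
      for (const sx of [-1, 1])
        vertices.push([cx + sx * w / 2, cy + sy * h / 2, cz + sz * d / 2]);
  return vertices;
}
```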
Now you can call the function with different parameters to add differently sized cubes to your design.
Now that we have the numbers in the RAM, how do we export them so that our machines (e.g. laser cutters, mills, 3D printers) can understand and execute them?
In the end those machines want what are called "toolpaths", low-level instructions describing how the end effectors should move -- but software usually exists for these machines that conveniently generates toolpaths from common 2D or 3D formats, so we can work at the high level.
One widely accepted format for 2D vector graphics is SVG. It is human-readable and ridiculously easy to generate, which is why I use it for all my projects of this sort. The ~10-line function below takes the width and height of the canvas and an array of polylines (each of which is an array of vertices), and writes out the string containing the SVG.
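A sketch of such a function (the stroke and fill attributes are my own choices, not canonical):

```javascript
// Serialize polylines (arrays of [x, y] vertices) into a minimal SVG string.
function to_svg(w, h, polylines) {
  let svg = `<svg xmlns="http://www.w3.org/2000/svg" width="${w}" height="${h}">\n`;
  for (const poly of polylines) {
    const pts = poly.map(([x, y]) => `${x},${y}`).join(' ');
    svg += `<polyline points="${pts}" fill="none" stroke="black"/>\n`;
  }
  return svg + '</svg>\n';
}
```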
Now you can open your file in Adobe Illustrator, Inkscape, or even a web browser.
For 3D output, STL is very common and just as easy to generate. However, it is slightly bloated due to the way it stores triangles. Here's the simple code that generates the ASCII version:
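A sketch of the ASCII writer, here called `to_stl` (the normals are left as zeros; most software recomputes them from the vertices anyway):

```javascript
// Write triangles (arrays of three [x, y, z] vertices) as ASCII STL.
function to_stl(triangles) {
  let stl = 'solid design\n';
  for (const tri of triangles) {
    stl += 'facet normal 0 0 0\nouter loop\n';
    for (const [x, y, z] of tri) stl += `vertex ${x} ${y} ${z}\n`;
    stl += 'endloop\nendfacet\n';
  }
  return stl + 'endsolid design\n';
}
```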
Now you can open your model in your favorite 3D software. Note that the file size gets quite big when there're a lot of triangles. Here's the code to generate the binary version, which is a lot more compact:
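A sketch of the binary writer, here called `to_stl_bin`. The format is an 80-byte header, a little-endian uint32 triangle count, then 50 bytes per triangle (normal + three vertices as float32, plus a 2-byte attribute):

```javascript
// Write triangles as binary STL into an ArrayBuffer.
function to_stl_bin(triangles) {
  const buf = new ArrayBuffer(84 + 50 * triangles.length);
  const view = new DataView(buf);
  view.setUint32(80, triangles.length, true); // little-endian count
  let o = 84;
  for (const tri of triangles) {
    o += 12; // skip the normal, left as zeros
    for (const v of tri)
      for (const c of v) { view.setFloat32(o, c, true); o += 4; }
    o += 2; // attribute byte count, left as zero
  }
  return buf;
}
```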
Besides STL, OBJ is another of my favorite easily generatable formats.
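OBJ is even friendlier: `v` lines list vertices, and `f` lines list faces by 1-based vertex index. A sketch:

```javascript
// Write vertices ([x, y, z]) and faces (arrays of 0-based vertex
// indices) as an OBJ string. OBJ indices start at 1, hence the + 1.
function to_obj(vertices, faces) {
  let obj = '';
  for (const [x, y, z] of vertices) obj += `v ${x} ${y} ${z}\n`;
  for (const face of faces) obj += 'f ' + face.map(i => i + 1).join(' ') + '\n';
  return obj;
}
```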
Believe it or not, we now have everything we need to generate a 2D parametric design. For example, in this project, I designed a laser-cut press-fit construction kit by describing the outline of every piece with code. There're quite a few pieces, and you can read the full source code. But here, let me show the basic flow of a program that can generalize to any press-fit project:
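A hypothetical skeleton of that flow (the shapes and parameter names are illustrative, not the actual project code): each piece is a function from parameters to an outline polyline, slits are notches the width of the material cut into an edge, and all outlines are collected into one array for an SVG writer like the one described above.

```javascript
// Material thickness determines slit width (add kerf compensation here).
const THICKNESS = 3.1;

// A rectangular piece of size w x h, with nSlits slits notched into
// its top edge, traced as a single closed polyline.
function piece(w, h, nSlits, slitDepth) {
  const outline = [[0, 0]];
  const step = w / (nSlits + 1);
  for (let i = 1; i <= nSlits; i++) {
    const x = i * step; // center of this slit
    outline.push(
      [x - THICKNESS / 2, 0], [x - THICKNESS / 2, slitDepth],
      [x + THICKNESS / 2, slitDepth], [x + THICKNESS / 2, 0]);
  }
  outline.push([w, 0], [w, h], [0, h], [0, 0]); // close the loop
  return outline;
}

// Collect every piece, then hand the polylines to the SVG writer.
const polylines = [piece(60, 20, 2, 10), piece(40, 20, 1, 10)];
```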
As you might have noticed in the previous example, it is sometimes hard to wrap our minds around directly enumerating the vertices. For simple shapes like rectangles and slits it is perhaps manageable, but when it comes to complex geometries, or shapes that are unions, intersections or differences of multiple sub-shapes, it starts to hurt the brain. Of course it is still doable to generate everything in vector form (and extra cool if you manage it from scratch; there're also libraries that do it for you), but working in raster makes our lives much easier.
Say you're generating the same piece as in the previous example. Using the canvas API, you can simply write:
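Something along these lines (a sketch; in a browser, `ctx` would come from a canvas's `getContext('2d')`):

```javascript
// Draw the piece instead of enumerating its vertices: fill the body
// in white (foreground), then punch out the slits in black (background).
function drawPiece(ctx, w, h, nSlits, slitDepth, thickness) {
  ctx.fillStyle = 'white';
  ctx.fillRect(0, 0, w, h); // the body of the piece
  ctx.fillStyle = 'black';
  const step = w / (nSlits + 1);
  for (let i = 1; i <= nSlits; i++) // each slit is just another rectangle
    ctx.fillRect(i * step - thickness / 2, 0, thickness, slitDepth);
}
```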
Notice that instead of working out the subtraction of the slit from the rectangle in our heads, by a clever use of the foreground color (white) and background color (black), we pass the task onto the trusty rendering engine. Imagine when the shapes are a hundred times more complex: this surely saves us a lot of time.
Convenient though raster operations are, in the end we still need a vector output: remember that the machines need to know the path to move along; showing them a "photograph" of what you think the result should look like is not good enough.
So adding these lines to our previous example:
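A simplified sketch of the idea (not the project's actual tracer): walk the pixels and emit one tiny axis-aligned segment wherever a white pixel borders a black one. A real pipeline would join these into closed loops and smooth the staircase; in the browser, the pixels would come from `ctx.getImageData`.

```javascript
// Extract the white/black boundary of a binary image {width, height,
// data} (one 0/1 entry per pixel) as 2-point polylines.
function traceEdges(im) {
  const at = (x, y) =>
    x < 0 || y < 0 || x >= im.width || y >= im.height ? 0 : im.data[y * im.width + x];
  const segments = [];
  for (let y = 0; y < im.height; y++)
    for (let x = 0; x < im.width; x++) {
      if (!at(x, y)) continue;
      if (!at(x - 1, y)) segments.push([[x, y], [x, y + 1]]);         // left edge
      if (!at(x + 1, y)) segments.push([[x + 1, y], [x + 1, y + 1]]); // right edge
      if (!at(x, y - 1)) segments.push([[x, y], [x + 1, y]]);         // top edge
      if (!at(x, y + 1)) segments.push([[x, y + 1], [x + 1, y + 1]]); // bottom edge
    }
  return segments; // feed to the SVG writer, scaled by your chosen DPI
}
```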
Again we arrive at an SVG output. However, there is one catch: the smallest unit in a raster is one pixel; below that, there is no accuracy. So you need to pick a very small physical unit to correspond to one pixel. You might have heard of the terms DPI (dots per inch) or PPI (pixels per inch); it's the same idea. Figure out the specs of your machine (or whatever precision your design requires): say it needs to be accurate to one mil, then make one pixel correspond to 1/2 mil (or 2000 DPI) to be safe.
It might also be a good idea to make the resolution a parameter. When developing and debugging the program, use a low resolution to speed up rendering times; for export, go as high as you can.
In this project, where I made a life-sized rickshaw, I based my workflow entirely on this raster-to-vector approach.
When generating construction kits like my brush hanger and rickshaw, it is sometimes hard to visualize in our heads what the assembled object will look like; indeed, we might even make errors and create "impossible" designs.
One way is of course to define the 3D pose of each piece in code, and have the program render or even animate the assembly. Another, perhaps lazier, way is to piggyback on the capabilities of existing 3D software: you can export the pieces, cut out from the board and extruded to the correct thickness, and assemble them in your favorite 3D software.
Note that the polylines data is programmatically injected into the Ruby script, and the Ruby script is written to the folder where SketchUp finds user extensions.
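A hypothetical sketch of that injection step: serialize the polylines into a Ruby string that re-draws each piece as a face and extrudes it (`add_face` and `pushpull` are SketchUp's Ruby API; everything else here is my own naming).

```javascript
// Generate a SketchUp Ruby extension from an array of polylines.
// Each polyline becomes a face, extruded to the material thickness.
function toSketchupScript(polylines, thickness) {
  let rb = 'model = Sketchup.active_model\nents = model.active_entities\n';
  for (const poly of polylines) {
    const pts = poly.map(([x, y]) => `[${x}, ${y}, 0]`).join(', ');
    rb += `face = ents.add_face(${pts})\nface.pushpull(${thickness})\n`;
  }
  return rb; // write this string out as a .rb file in SketchUp's Plugins folder
}
```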
A bit hacky, I admit, but this is just an example; I'm sure your favorite 3D software has a better way of achieving this.
With a 3-axis CNC, you can machine 3D designs -- almost. The machine can't reach underneath or attack the material from the side, so you cannot get overhangs or undercuts. In other words, Z is strictly a function of X and Y: at a given coordinate on the bottom plane, there can be only one corresponding point on the surface.
For these kinds of situations, our "pseudo-3D" designs can be generated as a 2D "depthmap" (or "heightmap", depending on your perspective): we can store the coordinate on the third axis as the grayscale value of each pixel.
The advantages are manifold, since 2D is much "easier" than 3D: instead of, say, finding out how to overlay an embossed design on a curved surface in 3D, we can simply superimpose the depthmaps using blend modes; instead of figuring out how to construct triangles into meshes, we can simply "paint" the complex geometries on a "canvas". Moreover, 2D images are more easily previewed than 3D models.
After we're done with the depthmap, we can convert it into a 3D model that other 3D software, such as the CAM tool needed to generate the toolpaths, can understand. Here's a simple piece of code that does that:
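A sketch of the conversion: one vertex per pixel, two triangles per cell of the pixel grid. Only the top surface is shown here; real code would also add side walls and a bottom so the mesh is watertight.

```javascript
// Turn a grayscale depthmap {width, height, data} into an array of
// triangles, lifting each pixel to z = value * zScale.
function depthToTriangles(im, zScale = 1) {
  const at = (x, y) => [x, y, im.data[y * im.width + x] * zScale];
  const triangles = [];
  for (let y = 0; y < im.height - 1; y++)
    for (let x = 0; x < im.width - 1; x++) {
      const [a, b, c, d] = [at(x, y), at(x + 1, y), at(x + 1, y + 1), at(x, y + 1)];
      triangles.push([a, b, c], [a, c, d]); // quad ABCD -> ABC + ACD
    }
  return triangles;
}
```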
The code generates an array of triangles (each containing 3 coordinates), fit for writing to STLs. You can directly pass the output to `to_stl` and `to_stl_bin`, which we introduced earlier.
As for creating the design as a depthmap, there're a couple of handy tricks. The first is the distance transform.
For each pixel on a binary image of a silhouette (of anything), the distance transform finds out the distance from that pixel to the closest point on the outline of the silhouette.
This becomes massively useful if you're generating an object that's thick in the middle and thinner near the edges, which is (roughly) how most organic things look.
If the "distance" in distance transform is defined as Euclidean distance (the length of the straight line connecting two points), the resultant gradient is usually linear, and creates a uniform slope when converted to 3D. We can easily apply a function to the output to bend the surface into any shape we want. Here's the code to run a distance transform on a canvas, normalize it to the 0.0-1.0 range, apply a function on top, and write the result back to (mutate) the original canvas:
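A sketch of that routine, with two simplifications: it operates on a plain `{width, height, data}` image instead of a canvas (in the browser you'd read and write pixels with `ctx.getImageData` / `ctx.putImageData`), and it uses a classic two-pass chamfer scan, which gives city-block rather than true Euclidean distance -- close enough for a sketch.

```javascript
// Distance transform of a binary image (1 = inside the silhouette,
// 0 = outside), normalized to 0..1, remapped by f, written back in place.
function distanceTransform(im, f = v => v) {
  const { width: w, height: h, data } = im;
  const INF = 1e9;
  const d = Float64Array.from(data, v => (v ? INF : 0));
  const idx = (x, y) => y * w + x;
  for (let y = 0; y < h; y++)        // forward pass: look up and left
    for (let x = 0; x < w; x++) {
      if (x > 0) d[idx(x, y)] = Math.min(d[idx(x, y)], d[idx(x - 1, y)] + 1);
      if (y > 0) d[idx(x, y)] = Math.min(d[idx(x, y)], d[idx(x, y - 1)] + 1);
    }
  for (let y = h - 1; y >= 0; y--)   // backward pass: look down and right
    for (let x = w - 1; x >= 0; x--) {
      if (x < w - 1) d[idx(x, y)] = Math.min(d[idx(x, y)], d[idx(x + 1, y)] + 1);
      if (y < h - 1) d[idx(x, y)] = Math.min(d[idx(x, y)], d[idx(x, y + 1)] + 1);
    }
  const max = Math.max(...d, 1);
  for (let i = 0; i < d.length; i++) data[i] = f(d[i] / max);
}
```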
For example, to create a more "spherical" shape (usually looks better), try:
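One candidate remap (an assumption of mine, not the project's exact formula): a quarter-circle profile, steep near the outline and flat near the middle, which reads as a rounded dome once converted to 3D.

```javascript
// Quarter-circle easing: v = 0 (at the outline) stays 0, v = 1 (at the
// "spine") stays 1, but values rise steeply near the edge.
const spherical = v => Math.sqrt(1 - (1 - v) * (1 - v));
```

Pass `spherical` as the function applied after the normalized distance transform.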
Another important trick is to use blend modes. If you've used Photoshop, or worked a bit with 2D graphics, you might be familiar with the concept.
For depthmaps, the default blend mode (called "default", "normal", or "blend" in different software) mushes two maps together to create a "middle ground" between the two; the "lightest" blend mode (sometimes called "lighten", not to be confused with "lighter" in the web canvas API, which is another blend mode) creates a "union" of the depthmaps, as it retains the higher value for each pixel; the "additive" blend mode (called "lighter" in the web canvas API and "linear dodge" in Photoshop) overlays the pattern of one depthmap onto the surface of another.
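Per pixel, the two useful ones boil down to a `max` and a clamped sum. A sketch on plain arrays (in the browser you'd instead set `ctx.globalCompositeOperation` to `'lighten'` or `'lighter'` and let the canvas do it):

```javascript
// Two depthmaps a and b as equal-length arrays; higher value = higher surface.
const lighten = (a, b) => a.map((v, i) => Math.max(v, b[i]));       // union of shapes
const lighter = (a, b) => a.map((v, i) => Math.min(255, v + b[i])); // emboss b onto a
```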
Finally, recall that for raster images the pixel is the smallest unit, so you'll need to generate at a high resolution for the output model to have the desired precision for fabrication. If you're using a "regular" 8-bit image as the container, only 256 different depths can be represented (plenty for many situations); if you need more, this is easily fixed by using a floating-point image or a more advanced image format.
You can read more about my molding and casting project here, which uses this 2.5D depthmap technique to generate a design CNC'ed out of machinable wax.
Now that we've seen how to generate 2D and 2.5D designs, it's time to "upgrade" to 3D. However, since 3D geometry can be complex, and can take virtually any shape depending on your project, it becomes hard to describe a "one-size-fits-all" methodology. But there are some universal concepts or building blocks that I find useful.
As mentioned before, a common way to represent 3D geometry is by listing all the triangles, tiny or big, that compose the surface of the object. A square is two triangles; a circle is many skinny circular sectors (each roughly approximated by a triangle); a cube is six squares, hence twelve triangles; the Stanford bunny scan has 69,451 tiny triangles.
While it is sometimes easier to generate the triangles directly (e.g. with patterns involving ellipses, stars, or regular polygons), oftentimes one might find quads easier to work with. A quad is basically a combination of two triangles (think of it as a distorted rectangle), and quads allow us to think more comfortably in a lattice- or grid-based system. After we've "built" a quad in our minds, we can write down its triangular decomposition (`ABCD` becomes `ABC` and `ACD`).
Take this simple code for generating a cylinder for example:
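A sketch of such a cylinder generator (end caps omitted for brevity; real code would fan-triangulate the two rims):

```javascript
// A cylinder out of quads: two circles for the rims, then one quad
// (two triangles) per segment for the wall.
function cylinder(r, height, n = 32) {
  const circ0 = [], circ1 = [];
  for (let i = 0; i < n; i++) {
    const a = (i / n) * Math.PI * 2;
    circ0.push([r * Math.cos(a), r * Math.sin(a), 0]);      // bottom rim
    circ1.push([r * Math.cos(a), r * Math.sin(a), height]); // top rim
  }
  const triangles = [];
  for (let i = 0; i < n; i++) {
    const j = (i + 1) % n; // wrap around at the seam
    const [A, B, C, D] = [circ0[i], circ0[j], circ1[j], circ1[i]];
    triangles.push([A, B, C], [A, C, D]); // quad ABCD -> ABC + ACD
  }
  return triangles;
}
```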
You can see that in the code, I first generate two circles (`circ0` and `circ1`), which are the outlines of the top and bottom surfaces (not really necessary for simple primitives like this, just as an example), and then iterate over them to find the quads that make up the "walls" of the cylinder. Finally, I write down each quad as two triangles.
Here is a common pattern I like to use: first generate the "spines" or "outlines" or key curves that define the shape of an object, then put "clothes" onto this "skeleton" by filling in quads on a grid, and finally convert the quads to triangles.
From the simple code for a cylinder, we can easily create, say, a "wobbly tube", by "stacking" multiple cylinders with a different radius at each joint.
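Sketching that out: generate one ring per joint, then run the same quad-walling between every pair of adjacent rings.

```javascript
// A tube whose radius varies along its height: one ring per entry in
// radii, quads between adjacent rings, quads split into triangles.
function wobblyTube(radii, height, n = 32) {
  const rings = radii.map((r, k) => {
    const z = (k / (radii.length - 1)) * height;
    return Array.from({ length: n }, (_, i) => {
      const a = (i / n) * Math.PI * 2;
      return [r * Math.cos(a), r * Math.sin(a), z];
    });
  });
  const triangles = [];
  for (let k = 0; k < rings.length - 1; k++)
    for (let i = 0; i < n; i++) {
      const j = (i + 1) % n;
      const [A, B, C, D] = [rings[k][i], rings[k][j], rings[k + 1][j], rings[k + 1][i]];
      triangles.push([A, B, C], [A, C, D]);
    }
  return triangles;
}
```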
This wobbly tube model gets you surprisingly far. Think about all the (genus zero) objects in the world: they can be approximated as a morph of the wobbly tube, so long as you provide the shape at each cross-section.
In most 3D softwares, you can move around, scale and rotate objects. The first two operations are easy to reproduce in our code: to translate, just add the offset to the coordinates of each vertex; to scale, just multiply the coordinates of each vertex by a factor. However, 3D rotation turns out to be a much larger headache.
There're multiple ways to represent 3D rotations. For example, quaternions are one of those things that everyone says you should use, but few actually understand. Personally, I like to use transformation matrices: not only can they express rotation, they also provide one unified "interface" to represent translation, rotation and scaling, as well as shearing, reflection and other nasty (linear) things you can do to your 3D object.
You can think of transformation matrices as a list of "commands". Multiplying a matrix by a vector applies the encoded transformation to the vector and produces a new vector; if you then multiply another matrix with the new vector, you apply another transformation; if the matrices are multiplied in a different order, the result will likely be different. What's especially beautiful is that you can first multiply multiple matrices together into one matrix, which will contain all the transformations, and then multiply that with a vector to apply all of them at once. You can send this "master" matrix (i.e. 16 numbers) to your friend, and they'll know exactly how you transformed your object!
You can read a lot more about transformation matrices and rotation matrices on Wikipedia.
I have a piece of matrix math code that I've ported to many languages, which I always copy-paste into whatever 3D project I'm working on. It features an unrolled 4x4 matrix multiplication, and routines for applying matrices to 3D vectors that automatically take care of homogeneous coordinates.
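The interface looks something like this (a loop-based sketch; the unrolled version is just these same sums written out by hand for speed):

```javascript
// 4x4 matrices as flat arrays of 16 numbers, row-major.
function matMul(a, b) {
  const out = new Array(16).fill(0);
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++)
      for (let k = 0; k < 4; k++)
        out[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
  return out;
}
// Apply a matrix to a 3D point: treat it as [x, y, z, 1] (homogeneous
// coordinates) and divide the w component back out.
function transform(m, [x, y, z]) {
  const w = m[12] * x + m[13] * y + m[14] * z + m[15];
  return [
    (m[0] * x + m[1] * y + m[2] * z + m[3]) / w,
    (m[4] * x + m[5] * y + m[6] * z + m[7]) / w,
    (m[8] * x + m[9] * y + m[10] * z + m[11]) / w,
  ];
}
```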
These are the basic things that can get one kick-started on generating 3D objects, but there's so much more fun stuff. You can read about how I generated a decorated puzzle ball with multiple layers and 3D printed it here.
After generating the key structures of my design, I often feel compelled to add some patterns to decorate the surfaces.
There're countless types of patterns, and even more algorithms to generate them. Therefore I'll simply present a couple of simple ones I've played with recently, then introduce a method to ensure a generated pattern is machinable.
The "shattered ceramic" pattern is a typical one in the classical Chinese visual language. At first glance, it looks like a Voronoi diagram. Upon closer inspection, however, these patterns have sharper corners and are less like rounded "cells". I simulated it with an algorithm involving line intersection and recursive growing. You can find the source code here (circa line 437).
The pattern above I name "swirly nothings". They look decorative but are in fact just a bunch of swirls stuck together. The algorithm uses Poisson disk sampling to pack circles, then develops the swirls from the circles. You can find the source code here (circa line 481).
This "flowers, leaves, and vines" pattern is generated in 3D. The vines "grow" using something akin to a maze generation algorithm, while the flowers and leaves are made using the quad mesh technique previously described. You can find the source code here.
Oftentimes a pattern is limited by the capability of the machine. I tend to produce extremely intricate patterns that the end effectors are too thick to fabricate. When this happens without my noticing, the design looks nice on my screen, yet the machined product turns out unexpectedly rough. Therefore, I wrote some code to "preview" what a pattern would look like if milled with an endmill of a certain diameter.
The function modifies a web canvas context (binary image, white on black) in-place to a millable version.
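The idea can be sketched as a morphological "opening": erode the white regions by the tool radius, then dilate them back. Detail thinner than the endmill simply disappears, which is exactly what the machine would do to it. (A sketch on a plain `{width, height, data}` binary image; the canvas version would read and write pixels with `getImageData` / `putImageData`.)

```javascript
// In-place "millability" preview: opening with a circular structuring
// element of the given radius (in pixels). data holds 0/1 per pixel.
function millablePreview(im, radius) {
  const { width: w, height: h, data } = im;
  const offs = []; // all pixel offsets within the circular tool tip
  for (let dy = -radius; dy <= radius; dy++)
    for (let dx = -radius; dx <= radius; dx++)
      if (dx * dx + dy * dy <= radius * radius) offs.push([dx, dy]);
  const hit = (src, x, y, val) => offs.some(([dx, dy]) => {
    const [X, Y] = [x + dx, y + dy];
    return X >= 0 && Y >= 0 && X < w && Y < h && src[Y * w + X] === val;
  });
  const eroded = data.map((_, i) =>
    hit(data, i % w, (i / w) | 0, 0) ? 0 : 1); // any black within reach -> black
  for (let i = 0; i < data.length; i++)        // dilate back, in place
    data[i] = hit(eroded, i % w, (i / w) | 0, 1) ? 1 : 0;
}
```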
With procedural designs, especially the decorative patterns we just discussed, it is often nice to have some element of randomness. It saves us the time of manual placement and makes each instance unique. While I could write another whole article (maybe a book) about different types of randomness and noise, I'd like to introduce two pieces of code that I find myself copy-pasting into every project.
Here is an extremely minimalistic (and fast) seedable random generator, called `SHR3`, originally by George Marsaglia as a one-liner C macro:
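A JavaScript rendition (the 13/17/5 shift triple is one of Marsaglia's full-period xorshift choices; the `>>> 0`s keep the state an unsigned 32-bit integer, which JS doesn't do for free). Fine for procedural decoration, not for cryptography:

```javascript
// SHR3: a 3-shift 32-bit xorshift register. Returns a closure that
// yields numbers in [0, 1); the same seed gives the same sequence.
function shr3(seed) {
  let jsr = seed >>> 0 || 0x9e3779b9; // state must be nonzero
  return () => {
    jsr ^= jsr << 13; jsr >>>= 0;
    jsr ^= jsr >>> 17;
    jsr ^= jsr << 5;  jsr >>>= 0;
    return jsr / 4294967296; // map the 32-bit state to [0, 1)
  };
}
```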
That's it! You can even format it into one line in the spirit of the original:
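One possible one-line formatting (same generator, abusing the comma operator; the seed must be nonzero):

```javascript
const shr3 = s => (s >>>= 0, () => ((s ^= s << 13, s >>>= 0, s ^= s >>> 17, s ^= s << 5, s >>>= 0) / 4294967296));
```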
Perlin noise is the bandaid/elixir to every procedural generation project. Stuff looks ugly? Add Perlin noise. Don't even know what to make? Start by playing with Perlin noise.
Simplified explanation: Perlin noise basically gives "smoothness" to your random numbers. Instead of samples jumping abruptly between any values within the range, neighboring Perlin noise samples are similar. This makes a better model for natural-looking textures and surfaces. Noise at multiple scales can be superimposed (called "octaves") to create different levels of detail.
I use the implementation from p5.js:
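p5's actual implementation is a few dozen lines; what follows is not that code but a miniature 1D value-noise sketch of the same idea (random values at integer lattice points, eased interpolation between them, several octaves summed). The hash constants are arbitrary choices of mine:

```javascript
// A tiny seedable 1D value noise in the spirit of p5.js's noise().
function makeNoise(seed = 1) {
  const hash = n => { // integer -> pseudo-random value in [0, 1)
    let x = (n * 374761393 + seed * 668265263) >>> 0;
    x = Math.imul(x ^ (x >>> 13), 1274126177) >>> 0;
    return (x ^ (x >>> 16)) / 4294967296;
  };
  const fade = t => t * t * (3 - 2 * t); // smoothstep easing
  return (x, octaves = 4) => {
    let sum = 0, amp = 0.5, freq = 1;
    for (let o = 0; o < octaves; o++) { // sum octaves of doubling frequency
      const i = Math.floor(x * freq), t = fade(x * freq - i);
      sum += amp * ((1 - t) * hash(i) + t * hash(i + 1));
      amp /= 2; freq *= 2;
    }
    return sum; // in [0, 1), and smooth in x
  };
}
```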
Thanks for reading this far! I hope you're not bored to death. I shared quite a few techniques I've found useful, but they're in no way exhaustive. Moreover, I find myself learning new things all the time (and hence deprecating my old methods), so what's described above might not be the best way. If you think there's a better way to do something, please let me know!