After posting about The Nature of Code by Daniel Shiffman, the book explaining basic concepts of computational design, I thought it would be helpful to convert all the examples into Grasshopper files. Well, here you go: Jake Hebbert has done it on YouTube, with exciting tutorials using Python for Grasshopper. Here are a couple of example tutorials extracted from Jake’s YouTube channel:
Here is the book that I kept mentioning in the tutorial: The Nature of Code by Daniel Shiffman
The book explains many algorithms that attempt to reproduce natural systems (including swarms and fractals) using Processing, the Java-based scripting environment.
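To give a flavour of the book’s approach, here is a minimal sketch in plain Python of the random walker that opens The Nature of Code. The class and variable names are my own; the book draws the walk with Processing, while this version just records the positions, which is closer to how you would use it inside a Grasshopper Python component.

```python
import random

class Walker:
    """A grid random walker: each step moves one unit in a random direction."""

    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def step(self):
        # Pick one of the four cardinal directions with equal probability.
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        self.x += dx
        self.y += dy

# Record the path of a 100-step walk.
walker = Walker()
path = [(walker.x, walker.y)]
for _ in range(100):
    walker.step()
    path.append((walker.x, walker.y))
```

In Grasshopper you would feed `path` out of the component as a list of points and polyline them; the later chapters build up from this kind of agent to full swarms.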
For some examples, you will need to download the Toxiclibs library, and you might want to use the Eclipse IDE to speed up your workflow. You can also follow the great Plethora-Project.com tutorials by Jose Sanchez.
Below are the best tutorials I have found so far to learn Revit Architecture (they go from 1 to 17); here is the YouTube channel.
Also, to link Grasshopper with BIM tools such as Revit, Vasari or Digital Project, have a look at the Autodesk webinar series on that topic. Geometry Gym uses the IFC OpenBIM data model to export families from Grasshopper to Revit. This allows items such as walls or slabs, with their materials etc., to be imported as such into Revit, Digital Project or any other BIM software. Another plugin focuses on “Adaptive Components” from Grasshopper to Revit. Chameleon allows models to be brought back into Grasshopper from Revit, which is useful when running simulations.
Last year we saw Jack Munro making use of the DAVID 3D Laser Scanner software to scan his sand cast. Although home-made, this scanner requires a laser and about £300 of software.
Autodesk 123D Catch is a free piece of software that allows the user to create a 3D scan [imports into Rhino as a mesh] using only a set of normal photos. It is also available as an app for the iPhone meaning that you can take the photos and upload them at the same time. All of the image processing is done on Autodesk’s servers.
The video above uses the LinceoRV augmented reality software to show a comparison between the real object and its digital mesh counterpart.
The resolution depends on the number of photos, their positions, the background, lighting conditions, etc., but the scan depicted appears to have a resolution of about ±2mm on a 120mm wide object.
While trying to figure out useful ways to interact with some wire frame models of 3D Harmonographs, I started exploring some examples of augmented reality software that allows a 3D mesh model to be tracked to a physical marker. The two pieces of software experimented with were the AR Plugin for Autodesk Showcase and LinceoRV. Both are stand-alone render/presentation engines with an augmented reality mode.
I found that the Showcase AR Plugin worked well with pre-recorded footage but would not accept my live webcam feed, while LinceoRV worked much better with the live feed but was more limited in the types of marker it accepted. Both pieces of software can handle multiple markers.
The software essentially analyses a binary [black and white] version of the video feed, recognises the marker symbols, and works out their distortion due to perspective. It then uses this distortion to accurately reconstruct the camera position relative to the digital model.
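The perspective step can be sketched in plain Python. Given the four corners of a square marker and the positions where the camera sees them, a planar homography captures the distortion; real AR software then combines this with the camera intrinsics to recover the full 3D camera pose, and it detects the corners itself rather than being handed them. The corner coordinates below are made-up numbers for illustration, and the solver is a bare-bones direct linear transform with `H[2][2]` fixed to 1.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def estimate_homography(src, dst):
    """Fit the 3x3 planar homography mapping src -> dst from exactly four
    point pairs, fixing H[2][2] = 1 (the classic DLT setup)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h1*x + h2*y + h3) / (h7*x + h8*y + 1), rearranged to be linear in h.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def project(H, pt):
    """Apply H to a 2D point; the homogeneous divide produces the perspective."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A unit square marker in marker coordinates, and where the camera sees its
# corners in the image (perspective-distorted; numbers invented for the sketch).
marker = [(0, 0), (1, 0), (1, 1), (0, 1)]
image = [(10, 12), (52, 15), (50, 58), (8, 55)]
H = estimate_homography(marker, image)
```

With `H` in hand, `project(H, corner)` lands back on each observed image position, which is exactly the consistency check an AR tracker exploits frame after frame.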
Using the LinceoRV software could be a useful way to present and manipulate 3D models that are too challenging or costly to print.