Here are all the webinars given by David Rutten, the creator of Grasshopper:
Introduction to Grasshopper with David Rutten:
David Rutten’s Introduction to Grasshopper Webinar:
Advanced Topics in Grasshopper:
You can also access Rhino tutorials on their Vimeo channel (https://vimeo.com/rhino). You will find amazing tutorials there, such as an introduction to Scan & Solve structural analysis (Michael Clarke wrote a post on it previously: https://wewanttolearn.wordpress.com/2012/11/08/scan-and-solve-for-rhino/):
Welcome to Freeform Modeling In Rhino
After posting about the book explaining basic concepts of computational design, The Nature of Code by Daniel Shiffman, I thought it would be helpful to convert all the examples into Grasshopper files. Well, here you go: Jake Hebbert has done it on YouTube, with exciting tutorials using Python for Grasshopper. Here are a couple of examples of tutorials taken from Jake's YouTube channel:
Gravity between movers:
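Jake's "gravity between movers" video follows Shiffman's mutual-attraction example from the book: every mover pulls on every other with a force of G·m1·m2/d². Below is a minimal plain-Python sketch of that idea (the class, constants and starting positions are my own illustration, not Jake's Grasshopper code):

```python
import math

G = 1.0  # gravitational constant, arbitrary units

class Mover:
    def __init__(self, mass, x, y):
        self.mass = mass
        self.pos = [x, y]
        self.vel = [0.0, 0.0]
        self.acc = [0.0, 0.0]

    def attract(self, other):
        """Force this mover exerts on `other` (Newtonian attraction)."""
        dx = self.pos[0] - other.pos[0]
        dy = self.pos[1] - other.pos[1]
        d = math.hypot(dx, dy)
        dc = max(min(d, 25.0), 5.0)  # clamp distance, as Shiffman does, to avoid extremes
        strength = G * self.mass * other.mass / (dc * dc)
        return [strength * dx / d, strength * dy / d]

    def apply_force(self, f):
        self.acc[0] += f[0] / self.mass
        self.acc[1] += f[1] / self.mass

    def update(self):
        self.vel[0] += self.acc[0]; self.vel[1] += self.acc[1]
        self.pos[0] += self.vel[0]; self.pos[1] += self.vel[1]
        self.acc = [0.0, 0.0]

movers = [Mover(2.0, 0.0, 0.0), Mover(1.0, 30.0, 0.0)]
for frame in range(10):  # each iteration = one animation frame
    for a in movers:
        for b in movers:
            if a is not b:
                b.apply_force(a.attract(b))
    for m in movers:
        m.update()

gap = movers[1].pos[0] - movers[0].pos[0]
print(round(gap, 2))  # the movers have drifted closer than their initial 30 units
```

In Grasshopper the same loop runs inside a Python component with a timer, outputting the positions as points each frame.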
A couple of pictures from our latest tutorials and crits. Thank you to Tommaso Franzolini, Miriam Dall’Igna, Colin Ball, Karl Kjelstrup-Johnson, Magnus Larsson, Jack Munro and Savvas Havatzias for the very helpful comments on our last cross-crit.
Jake Alsop’s Temple for Bees
Josh Haywood’s Prism Prison
Andrei Jippa’s fractals
Andrei Jippa 3d printed fractal
Michael Clarke’s Reciprocal Structure
Here is the book that I kept mentioning in the tutorial: The Nature of Code by Daniel Shiffman
The book explains many algorithms that attempt to reproduce natural systems (including swarms and fractals) using Processing, the Java-based programming environment.
You can download the book and make a donation, or buy the hard copy. Try some examples, register on the Processing forum and on StackOverflow.com, and ask for help on the Processing IRC channel.
For some examples, you will need to download the Toxiclibs library, and you might want to use the Eclipse IDE to speed up your workflow. You can also follow the great Plethora-Project.com tutorials by Jose Sanchez.
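Porting the book's examples mostly means translating Processing's setup/draw loop into your language of choice. As an illustration (my own minimal Python, not from the book's repository), the opening random-walker example reduces to a few lines:

```python
import random

random.seed(1)  # fixed seed so the walk is reproducible

def random_walk(steps):
    """Nature of Code's opening example: a walker that takes one step
    in a random cardinal direction per frame."""
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = random_walk(100)
print(len(path))  # → 101 (the start point plus one point per step)
```

In Processing the loop body would live in draw() and plot a point per frame; in Grasshopper the path would come out of a Python component as a polyline.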
Below are the best tutorials I have found so far to learn Revit Architecture (they go from 1 to 17); here is the YouTube channel.
Also, to link Grasshopper with BIM tools such as Revit, Vasari or Digital Project, have a look at the Autodesk webinar series on that topic. Geometry Gym uses the IFC OpenBIM data model to export families from Grasshopper to Revit. This allows items such as walls or slabs, complete with materials etc., to be imported as such into Revit, Digital Project or any other BIM software. Another plugin focuses on bringing “Adaptive Components” from Grasshopper to Revit. Chameleon can also bring models back from Revit into Grasshopper, which is useful when running simulations.
Jon Mirtschin’s Geometry Gym IFC importer:
Hiroshi Jacobs’ Chameleon:
Tim Meador’s Hummingbird:
Nathan Miller’s OpenNurbs Import:
All these initiatives are discussed in the Grasshopper group on the forum. Vasari has a forum very similar to the Grasshopper one; have a look: http://autodeskvasari.com/forum
Autodesk’s products are free for students; download them here: http://students.autodesk.com/
Follow the link to the Autodesk 123D Viewer to view the native scan
Last year we saw Jack Munro making use of the DAVID 3D Laser Scanner software to scan his sand cast. Although the rig itself is home-made, this scanner requires a laser and about £300 of software.
Autodesk 123D Catch is a free piece of software that allows the user to create a 3D scan [imports into Rhino as a mesh] using only a set of normal photos. It is also available as an app for the iPhone meaning that you can take the photos and upload them at the same time. All of the image processing is done on Autodesk’s servers.
The video above uses the LinceoRV augmented reality software to show a comparison between the real object and its digital mesh counterpart.
The resolution depends on the number of photos, position of photos, background, lighting conditions, etc, but the scan depicted appears to have a resolution of about ±2mm on a 120mm wide object.
While trying to figure out useful ways to interact with some wire frame models of 3D Harmonographs, I started exploring some examples of augmented reality software that allows a 3D mesh model to be tracked to a physical marker. The two pieces of software experimented with were the AR Plugin for Autodesk Showcase and LinceoRV. Both are stand-alone render/presentation engines with an augmented reality mode.
I found that the Showcase AR Plugin worked well with pre-recorded footage but would not accept my live webcam feed, while LinceoRV worked much better with the live feed but was more limited in the types of marker it accepted. Both pieces of software can handle multiple markers.
The software basically analyses a binary [black and white] feed from the film, recognises the marker symbols, and works out their distortion due to perspective. It then uses this distortion to accurately recreate the camera position in relation to the digital model.
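The key step in that pipeline is estimating the perspective map (a homography) from the marker's known square corners to their distorted positions in the frame. Here is a rough sketch of just that step, using a small NumPy solve and made-up pixel coordinates (this is a textbook Direct Linear Transform for illustration, not LinceoRV's or Showcase's actual code):

```python
import numpy as np

def homography_from_corners(src, dst):
    """Direct Linear Transform: find the 3x3 perspective map H (with
    H[2,2] fixed to 1) sending each src corner to its dst position."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h1*x + h2*y + h3) / (h7*x + h8*y + 1), rearranged into linear form
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# the marker's true square corners, and where they appear on film
marker = [(0, 0), (1, 0), (1, 1), (0, 1)]
seen = [(10, 12), (110, 18), (105, 122), (8, 115)]  # hypothetical pixels
H = homography_from_corners(marker, seen)

# re-projecting a marker corner through H lands on its observed position
p = H @ np.array([1.0, 0.0, 1.0])
print(round(float(p[0] / p[2]), 1), round(float(p[1] / p[2]), 1))  # → 110.0 18.0
```

Given the camera's intrinsics, the camera's position and orientation relative to the marker can then be decomposed from H, which is how the render engine knows where to draw the digital model over the live feed.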
Using the LinceoRV software could be a useful way to present and manipulate 3D models that are too challenging or costly to print.