
Scan to BIM: Point Clouds Reloaded

Kelly Cone - Beck Group

AB3057

Now that you have taken the Red Pill, you will need some training to be "the One" at your firm who understands how to work with all this point-cloud mumbo jumbo. Starting out in Autodesk® Revit® software, we will show you how to link point clouds and show you what the pitfalls are. We will show you how to manage the potential impact on view performance and the best practices and tools for modeling in your new virtual reality. We'll also introduce you to some plug-ins that let you do a lot more when modeling than Revit can do natively. We will bridge the paradox of either modeling with too much detail or creating a model that is too generic. And, finally, we'll take a quick spin in Autodesk® Navisworks® software to help you understand that sometimes you don't need to model everything (and even have you jump off a building or two while we're in there. Don't worry. You won't get hurt. We'll have collisions turned off). If you can make it through the training, you'll be ready for the revolution.

Learning Objectives

At the end of this class, you will be able to:

·  Understand file formats and point coloring options when working with scans

·  Link in massive point cloud files without destroying your model

·  Describe the various tools and techniques for using point cloud files to generate BIMs

·  Understand the advantages and pitfalls of simplifying the point cloud to a BIM

About the Speakers

Kelly Cone: Kelly joined the Beck Group as an architectural intern and has been focusing on technology and the implementation of BIM practices and software ever since. In his current role as Innovations Director, he oversees the implementation of BIM technology and processes nationwide in our Architecture, Estimating, Real Estate, and Construction groups. He is responsible for ensuring that the tools and technologies employed further Beck’s commitment to integrating the design and construction disciplines, and advance our efforts at making our buildings more sustainable. Kelly is also deeply involved in the Building Information Modeling community. He has spoken at numerous national and international BIM conferences and blogs about the future of BIM and the Revit platform at RevitFutures.blogspot.com.

Class Outline:

We are going to walk through the key topics below in the time we have. Having worked on many scanning projects, we could probably talk as a group about each of these headings for 20 or 30 minutes apiece, but that wouldn’t give us a chance to get through them all. Hopefully we will have time at the end for some good Q&A so we can get into more depth on any specific concerns or questions from the audience.

·  Intro (5M)

·  Getting Started with a Scan (15M)

·  Linking Scans into Revit (10M)

·  Working with Scans to Model Natively (10M)

·  Must-Have Plugins for Scan to BIM (15M)

·  To BIM or not to BIM (5M)

·  Scans in a coordination environment (15M)

·  Q&A (15M)

Introduction

So, if you were in my class last year at AU, this was the class I wanted to teach. However, due to scheduling, I decided to do an introductory class last year instead. Unfortunately, we couldn’t get the description changed in time, so it was a little confusing for some people in the audience. Don’t worry, this year everything lines up! For those who did not make it last year or are new to laser scanning and want some basic information, here is a link to a mind map from last year’s presentation:

There are some notable things to mention at AU regarding point clouds and Autodesk software. Autodesk has acquired Alice Labs, and I expect to see a lot of very positive changes in how all of Autodesk’s AEC products work with point clouds in the near future.

Also, I want to throw out one big caveat. At Beck we only work with Leica scan data. That means all our experience with these tools is based on Leica formats like PTS and PTX. There are other formats supported by both Revit and Navisworks, and some settings and options will be different. I believe that the same general rules we share here will apply to the other formats, but there is always something different. If you’ve got experience with another format and your work in that format has led you to a different experience, please speak up! One thing I love about AU is that I usually learn a lot from the attendees in my class – it isn’t a one-way street.

Getting Started with a Scan

So, you’ve got a scan back. Now what do you do with it? Get it into your software of course! Naturally, it isn’t as simple as clicking a button. What do you need to worry about? How do you do it?

Coordinates, coordinates, coordinates…

With scanning, you live and die by coordinates. Sometimes, for very small scan jobs, cloud-to-cloud registration and non-located scan files are acceptable. But for any big scan job, survey and control are what make your scan files reliable, and that means a located scan. Whether you choose state plane coordinates or instruct your scanning crew to use a local coordinate point (like a known grid intersection and elevation as the origin), defining a coordinate system that will work for your field crews and your BIMs is critical. Once everyone agrees on what that is, you *should* be able to get your scans to come in to the right spot with no fudging.

Units

Units are another important discussion for scanning deliverables. For instance, did you know that there is an actual difference between survey feet and international feet? No, that’s not a setup for a joke. Back in the day, a “foot” was not precisely defined as the same length everywhere in the world. In the US, we defined a foot as 1200/3937 meters. Meanwhile, the British foot was something else, etc. In 1959 the world got together and agreed on a standard value (sort of) of 1 foot = 0.3048 meters. However, some bonehead decided that within the US, any survey data expressed in feet for geodetic uses would continue to use the old definition. Unfortunately for us, this means that a survey foot is different from an international foot: one international foot equals 0.999998 US survey feet. I know it seems like a tiny difference, but size does matter (also not a setup for a joke). When you’re dealing with state plane coordinates (values in the millions of feet), this tiny difference can be a bust of quite a few feet. On one project, the difference between the two was over 40 feet. So, although the survey was correct, the shared coordinates in the project were correct, and the scan was correct, nothing lined up. Units really matter.
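To make the size of that bust concrete, here is a quick sketch of the arithmetic. The foot definitions are the real ones; the northing value is hypothetical, chosen to be in the millions of feet like a typical state plane coordinate:

```python
# The two foot definitions. The US survey foot keeps the pre-1959
# definition of 1200/3937 meters; the international foot is exactly 0.3048 m.
US_SURVEY_FOOT_M = 1200 / 3937
INTERNATIONAL_FOOT_M = 0.3048

def survey_to_international_feet(value_sft):
    """Re-express a length given in US survey feet as international feet."""
    return value_sft * US_SURVEY_FOOT_M / INTERNATIONAL_FOOT_M

# A hypothetical state plane northing, in the millions of feet.
northing_sft = 20_000_000.0
bust_ft = survey_to_international_feet(northing_sft) - northing_sft
print(f"Misreading the units here busts the coordinate by {bust_ft:.1f} ft")
```

At a 20-million-foot northing, the two-parts-per-million difference between the definitions works out to about 40 feet, which is exactly the kind of bust described above.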

Because of the confusion, I’d like to switch over to metric across the board permanently. (Please, no one throw any shoes at me unless they will fit my feet – Men’s size 12, by the way. Well, size 12 US.) Since that is unlikely to happen, you need to be aware of what you’re setting your project up in. Likewise, it would be nice if the software companies (ahem… Autodesk?) would allow us to define shared coordinates and import unit settings in both feet AND US survey feet until we all go metric. If your software supports importing/exporting scan data using metric values, I highly recommend using it, since meters are the same anywhere and any way you slice them. Reliability is a wonderful thing.

Coloring

Point clouds have several options for coloring depending on the program you’re using. Natively, the scanner captures not only the X, Y, and Z values of the data point, but also the intensity of the reflection of the laser. Technically, more than just this is recorded, but as it relates to point coloration, it’s the intensity that matters. Additionally, you can map photographs taken by the scanner or by a higher-end camera rig onto those points. From those data options, you can usually do a couple of things:

·  Color-mapped intensity (maps the intensity value to a wide color range)

·  Grayscale intensity (maps the intensity to a grayscale value)

·  Coloration based on the mapped photos

·  Color ranges based on X, Y, or Z values (maps the dimensional value to a color range)

The more advanced tools give you many more options, and some tools that are new to the game (like Revit and Navisworks) give you fewer.
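As a rough sketch of what the first two options are doing under the hood, here is the basic idea of mapping a normalized intensity to grayscale and to a color ramp. The exact ramps vary by tool; this simple blue-to-red ramp is just an illustration, not any vendor’s actual mapping:

```python
def intensity_to_grayscale(intensity):
    """Map a normalized 0..1 reflection intensity to an 8-bit gray value."""
    t = max(0.0, min(1.0, intensity))  # clamp out-of-range intensities
    return round(t * 255)

def intensity_to_color(intensity):
    """Map a normalized 0..1 intensity onto a simple blue-to-red ramp (R, G, B)."""
    t = max(0.0, min(1.0, intensity))
    return (round(t * 255), 0, round((1 - t) * 255))
```

Grayscale intensity is usually the easiest to read for modeling, since edges and material changes show up as sharp value shifts rather than as hue changes.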

Culling

Last but not least, culling is a very important topic given today’s technology. Three years ago, most scan data was acquired by time-of-flight technology. This was relatively slow; the best scanners could capture a few hundred thousand points per second at best. Sounds like a lot, but not compared to the phase-based and waveform scanners we have today. We just purchased a new scanner (Leica P20) that captures a million points per second. A single five-minute scan can easily approach 100 million points. So, what to do with all that data, and how will my poor laptop handle it when I have 50 of those scans??? (Hint: It won’t.) So, while I never used to cull data that I brought into Navisworks, for instance, I do cull it now.

Culling can be done a number of ways. I wish there were some more intelligent ways, but for now most culling falls under the headings below:

·  Noise culling – Getting rid of all those people, cars, birds, dogs, and other moving objects we don’t want to see. This is a manual process most of the time, but it gets rid of totally useless points so it is usually worth it.

·  Distance culling – This is done automatically to some extent by the scanner hardware as scan data gets to be out of the range within which the machine can return accurate data. You can also automatically apply additional range culling in most processing tools.

·  Dimensional culling – Want to get rid of any points 10 feet above the scanner or 5 feet below the scanner? You can do it. Dimensional culling is really useful when you know the data you need is at a certain location and the rest can be dumped.

·  Sequential culling – This is what most view-based and import-based culling is really about. It dumps X out of every Y number of points. Navisworks’ default setting is to keep 1 out of every 4 points (culls 3 out of every 4 points). This is a pretty stupid way to cull, as it culls equally no matter how dense or how tight the scan data is. However, it’s what we’ve got most of the time.

·  Density culling – I desperately want this option in the software tools I have. Rather than cull based on point data, I want to cull based on point relationships. If I only need a point every 1 inch, I only want to cull points in areas where the next adjacent points are closer than ½” from the current point. This is much harder and far more resource intensive to do, so most tools do not do it at all. Bummer right?

·  Property culling – This is also not done by many tools currently available. The idea is that you can discard any points with a reflection intensity within .05 of 1 for instance. This would remove points that might be reflected by safety vests or light sources for instance.
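To contrast the sequential and density approaches described above, here is a minimal sketch in plain Python. The point tuples are hypothetical, and the density pass is a naive O(n²) loop; real tools would need an indexed spatial structure, which is part of why most of them don’t offer density culling at all:

```python
import math

def sequential_cull(points, keep_one_of=4):
    """Keep 1 out of every N points regardless of spacing -- roughly what
    a default sequential import filter (like Navisworks' 1-in-4) does."""
    return points[::keep_one_of]

def density_cull(points, min_spacing=1.0):
    """Keep a point only if it is at least min_spacing away from every
    point already kept, so dense areas thin out and sparse areas survive."""
    kept = []
    for p in points:
        if all(math.dist(p, q) >= min_spacing for q in kept):
            kept.append(p)
    return kept

# A dense run of points spaced 0.5 units apart along X:
row = [(x * 0.5, 0.0, 0.0) for x in range(10)]
print(len(sequential_cull(row)))    # keeps 3 of 10, blind to spacing
print(len(density_cull(row, 1.0)))  # keeps 5 of 10, enforcing 1.0 spacing
```

On uniform data the two look similar, but on real scans (dense near the scanner, sparse far away) sequential culling throws away points you needed in the sparse areas while still keeping redundant points in the dense ones.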

Linking Scans into Revit

Once you’ve wrapped your head around what you want to see, it’s just a matter of getting it in. Revit is actually one of the most difficult programs to import point clouds into. Revit 2013 has improved things quite a bit, but because of the order in which the units conversion and instantiation into the model database happen, you have to get the import right or delete it. Fortunately, it is only the import you have to redo if you get it wrong – but that can still be frustrating.

Step 1 - Indexing

If you don’t have an indexed (PCG) file for a scan, you can change the file type in the import scan dialog to look for non-indexed formats. Select one, and you’ll get the popup shown to the right. You have no options, no settings, just a yes button. Simple as simple can be (too simple, probably). Culling has to be handled before you export to the non-indexed scan file as well.

Step 2 - Importing

Importing the indexed file into Revit, you are given a limited number of options, all of which are pretty cryptic. Ideally, we would get some ability to preview the results of this mumbo jumbo, as well as the ability to change it later. But for now…

·  Center to Center – what this really means is that the centroid of the individual scan will be placed at the centroid of your model. Why this is even an option has always baffled me, as I cannot think of a single good reason to ever use this.

·  Origin to Origin – the origin (0,0,0) of the scan file will line up with your internal Revit origin. This is a good option if your scan and your Revit model are set up on the same local origin point (grid intersection).

·  By Shared Coordinates – If you have shared coordinates in your Revit model that match the coordinates of your scan data (state plane, for instance), then this is your best option (maybe).

·  Origin to Last Placed – This is useful if you have multiple scan files, as it uses the transform from the previous import to set the transform of the current file. So, as long as the individual files (by scan, by level, whatever) are registered to the same coordinate system, you can use this to get imports 2 through X to match import 1.
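One way to keep the four options straight is to think of each as a different rule for computing the offset applied to the incoming scan points. The sketch below is a simplified mental model, not the Revit API: rotation is ignored, and the names and tuple inputs are illustrative only.

```python
def placement_offset(option, scan_centroid=None, model_centroid=None,
                     shared_origin=None, last_offset=None):
    """Return the (x, y, z) offset applied to incoming scan points.
    shared_origin is the shared-coordinate position of the internal origin."""
    if option == "center_to_center":
        # The scan's centroid lands on the model's centroid.
        return tuple(m - s for m, s in zip(model_centroid, scan_centroid))
    if option == "origin_to_origin":
        # Scan (0,0,0) lands on the internal Revit origin: no offset at all.
        return (0.0, 0.0, 0.0)
    if option == "by_shared_coordinates":
        # Shift shared-coordinate values back to the internal origin.
        return tuple(-c for c in shared_origin)
    if option == "origin_to_last_placed":
        # Reuse whatever offset the previous import resolved to.
        return last_offset
    raise ValueError(f"unknown option: {option}")
```

Seen this way, Origin to Last Placed is just a cache of the previous answer, which is why it only works when every file is registered to the same coordinate system to begin with.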