We Get Around Network Forum

Video: How to Scan Large Spaces with Matterport Pro2, Leica BLK360 & 360s

DanSmigrod:
Video: Matterport ShopTalk #10 Webinar: Scanning Large Spaces with Matterport Pro2 and Leica BLK360 | Video courtesy of Matterport YouTube Channel | 29 October 2020

Hi All,

From the Matterport YouTube Channel:

In this episode of Shop Talk, our host, Amir Frank, is joined by Volkie Yelkovan, Product Marketing Manager. Amir gets things started by taking a very deep dive into the best practices of capturing large (and unruly) spaces with a Matterport Pro2, compatible 360 cameras, and the Leica BLK360.

Volkie then updates us on the latest news about [Matterport] Capture for Android and the new Highlight Reel tool in [Matterport] Workshop. As always, we then had a great Q&A session with many questions about, you guessed it, tips and suggestions for how to scan a large space.


Source: Matterport YouTube Channel

Your thoughts?

Best,

Dan
DanSmigrod:
Hi All,

Here is a transcript of the Matterport ShopTalk #10 Webinar (29 October 2020) above.

Best,

Dan

--

Amir Frank:
Hey, everybody, thanks for joining us in this 10th edition of Shop Talk, very excited to have you with us. Let's go ahead and just get started, my name is Amir, and today we're going to be talking about, let's see here, large spaces. So, hopefully, this is something of interest and you'll be able to get a lot of information out of it, so yeah, so we'll be going over defining a large space, best practices for scanning that large space. We'll even be talking with Volkie Yelkovan who is joining us again, about Capture on Android, and the Highlight Reel, so, very exciting stuff. Volkie, thanks for being here with us.

Volkie Yelkovan:
Thanks, it's great to be here, Amir, as always, I'm excited to be with our participants as well, and Amir, I'm sure you're going to mention it, today we are trying Zoom after the GoTo meeting, so hopefully, we'll see how it goes, whether it will be a better experience for us and as well as for our valuable participants today.

Amir Frank:
Yeah, absolutely. For anybody for whom this is not your first Shop Talk, you'll notice that we are using the Zoom platform instead of GoToWebinar, and so hopefully, that will be a little bit better for anybody who's experiencing technical issues with GoToWebinar. After all that, if we have time, I left this for the end so we're sort of changing things up a little bit, if we have time for it because I do want to leave as much time for the Q&A, then I'll go into the customization tools that I prepared for this week and we can look at those. If we don't have time and we need to kind of skip over that part, then don't worry, we'll provide a link to that YouTube video and you'll be able to just watch it on your own, so no worries there. So, defining a large space, when we talk about large spaces it could mean one of several different things, and each different large space has its own unique challenges.

Amir Frank:
So, is it a really, really big empty warehouse that we're talking about? Large manufacturing plants with a lot more 3D surfaces, it's not just four walls, a ceiling, and a floor. Schools, malls, large apartment complexes, skyrises that go up. So, all these different types of large spaces present their own kind of unique challenges, and so, like I said here, while they're all different in that each one is structurally slightly different and again, therefore provides its own challenges, they're all the same in that they're all going to be time-consuming to scan. So have that in mind always when you're approaching a large space, it's not just something that you can say, "Okay, we've got a couple hours left in the day, let's run through this and crank it out so we can move on to the next job." These are time-consuming, they can potentially take several days to complete a single structure, depending obviously, on how big we're talking here.

Amir Frank:
But definitely always keep that in mind, and give yourself that amount of time to go and scan those. This is something that I've put together, when you're approaching a large space, any space really, but I think more so with large spaces, it's very, very important that you use the right tool for the job. What I mean by that is, the job that you're doing and not necessarily the space that you're scanning. So, depending on your needs or your client's needs, the tool for that job may be different, regardless of the space itself. BLK, the Pro2, 360 cameras, even the iPhone, they can all do it, the question is the results that you're going to get, whether those are going to meet your needs. So, keep that in mind, always choose the right tool for the right job, and sometimes it's going to be a combination of those tools. Always consider multiple models, what you see in this graph here are distances, so this small number here, 100 feet, the black bar is actually 100 feet, that's dead accurate.

Amir Frank:
The red line is the disparity when using the BLK, the yellow is the Pro2, the blue is a 360 camera if you're looking at within 4% accuracy, and the green, teal, whatever color that is, is 8% accurate. So remember, the accuracy of the tools that you're using, the camera that you're using, the level of accuracy in its measurement does range: with the BLK, it's within 0.1%, with the Pro2 it's within 1%, and with 360 cameras, it's between 4-8%. So I'm just showing you what that could potentially look like; best-case scenario is the blue, and the worst-case at 8% is this green/teal color. There's no way of calculating iPhone because it's so dependent on the person using it, most people using iPhones are not using tripods and spinning the phone around a very, very steady pivot point. That's not really the intention with the iPhone, so whether you're turning around yourself or turning the camera around you, you're still looking at Cortex using that 360 pano to generate your 3D data, and that's what you're basing measurements off of.
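
To make those accuracy figures concrete, here is a quick back-of-the-envelope sketch in Python (not anything from Matterport; the error percentages and the 100 and 900 foot distances are just the ones quoted above, and the script only turns them into measurement ranges):

# Measurement-error bands implied by the accuracy figures quoted above.
error_rates = {
    "BLK360": 0.001,             # within 0.1%
    "Pro2": 0.01,                # within 1%
    "360 camera (best)": 0.04,   # 4%
    "360 camera (worst)": 0.08,  # 8%
}

for true_length_ft in (100, 900):
    print(f"True length: {true_length_ft} ft")
    for camera, rate in error_rates.items():
        delta = true_length_ft * rate
        print(f"  {camera}: {true_length_ft - delta:.1f} to "
              f"{true_length_ft + delta:.1f} ft (+/- {delta:.1f} ft)")

At 900 feet, that works out to a spread of roughly plus or minus 36 to 72 feet for a 360 camera, while the BLK stays within about a foot, which is exactly the disparity the chart is getting at.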

Amir Frank:
So any kind of stitching artifacts that are introduced by not being dead accurate with the iPhone, they're going to translate into inaccuracies in your 3D model, and therefore measurements. So always keep these things in mind, and of course, use a good iPad or iPhone when connecting because large spaces mean a lot of scan positions. So an iPhone or iPad with a good amount of RAM, a good amount of hard drive space so you can store all that data and not have a problem when scanning positions that may not directly connect with the previously scanned position, we'll get into that in a little bit here, the Capture app starts looking at everything on that floor, and this could be hundreds of scans in a large model. That's a tremendous amount of data, and we are limited to the amount of time spent in finding alignment, so the strength of your iPad or iPhone comes into play very much so when dealing with large spaces.

Amir Frank:
Just to go over this one more time, as the space gets larger, you can see here for example on the very end of the chart, 900 feet, you can see that with the 360 cameras, there's going to be a good amount of disparity potential with the measurements. So again, keep that in mind, and if you are using 360s, that's fine, but maybe you want to break it up into multiple models, that's certainly always an option. You can have a few 300-foot, 400-foot models, whatever the case may be, and link them together with MatterTags, so always an option to consider. Okay, so the problems that may come up when scanning a large space are no alignment, and that's a pretty straightforward issue, you get a message, it says it wasn't able to align, you just back it up and figure that out. Misalignment, certainly more complicated because there is no message, misalignment happens when the system assumed it did align properly, but didn't.

Amir Frank:
So this really requires you to pay very close attention to those little blue dots that appear in your minimap, and where they appear in relation to other scan positions so you can tell whether it's been misaligned or not. Merged floors, this is pretty much impossible to spot, so it's just really important that you take the measures to prevent this from happening in the first place. What merged floor means is it's a misalignment, but it's not a misalignment along the X and Y axes so that you can see it in your minimap. If you think about the minimap in Capture as you've been scanning, you know that it's just a 2D image, from the top-down we see a single plane, well, merged floors is a misalignment along the Z-axis. So it's a height misalignment, and this is going to happen if you're scanning taller structures with fire escape staircases, every single floor looks identical. So the system may have a hard time aligning between floor 19 and floor 18, it'll bump that scan position down to floor 18 instead of where you actually are on 19, for example, and all of a sudden you've got two floors that are merged together into one.

Amir Frank:
Best-case, you have two floors merged into one, I've seen five floors be combined into one. And too much data to process, there is really nothing that you can do to prevent this. But if you upload a model with 1000 plus scan positions, it may not ever make it through the scanning process, or it does make it through the scanning process, but then navigating through that space becomes very, very challenging just because it's so monstrously big. So, again, that's something that you can't really predict, we always encourage you to try, give it a shot, upload it, see what happens, and if that doesn't work out, you can always use Capture to break up the model into smaller parts, and then upload those smaller parts and again, link those together. So, possible reasons for no alignment: there is too little overlapping or matching data, basically, this means either you've gone too far from one location to another location and it just can't piece those together or it can't identify them. Remember, the camera doesn't have any kind of gyroscopes or anything like that to give it any indication of how far you're moving.

Amir Frank:
It knows where you've moved only after you've scanned, and it tries to stitch those pieces together. Another possible issue with no alignment is infrared light. Too much infrared light coming in, and this absolutely does not have to be direct light, this can be ambient light that just bounces all around. If you've got very bright, white walls and huge picture windows, or just a lot of light coming into a room, that room may be very, very challenging to scan just because there's too much ambient infrared light bouncing around. So, just keep in mind that it's not necessarily direct light, it could be reflected. Misalignment, so again, with misalignments, it doesn't give you a reason, there's no indication because Capture thought it did the right thing, and that's why these are very tricky to spot and identify. Most common is similar architecture, so for example, imagine you're in a long hotel hallway where you've got rooms on this side, rooms on that side, and as you're moving down, it's kind of the same. It's very redundant, so you move down a little bit and it could think that you haven't moved at all, or again, lack of data where it doesn't have very much to go off of.

Amir Frank:
Merged floors, again, this is a very specific type of misalignment, similar architecture as I was saying with the fire escape type of staircase where they're just absolutely identical from one floor to the next, and skipping too many stairs. So if you just jump up from one landing to the next without taking the time to do the stairs properly, you could run into merged floor issues. Too much data to process, I mean, it's not magic, it is just a processing engine, it's just a computer trying to figure all this stuff out, so yeah, there is again, nothing you can do there other than break it up into smaller portions. Possible solutions, so let's look at no alignment, and again, just as a reminder, this is when you can see that the system identified it couldn't align, gives you the warning, and so with too little overlapping data, what you do is you just have more scan positions. When I first started at Matterport, it was said that five to eight feet is what's recommended, so I got a string, I measured it eight feet, I was tasked with scanning our office, which is about 14,000 square feet.

Amir Frank:
So I got the string and I measured it eight feet and I said, "Okay, I'm going to maximize the amount of distance between each scan position so I can minimize the overall number of scans, and thereby make this whole process a little bit faster." What I found was, that's actually not the best way to go because it takes more time to align between those scan positions because they are further apart. So, by shortening the distance between the scan positions, making it five to six feet, you actually end up saving yourself time, the overall number of scans, sure, are going to be greater, but it takes less time to align, and therefore you can move on and scan the next position faster. So always keep that in mind. The problem with large spaces is not so much with the total number of scans in the model, it's with the total number of surfaces in the model. So just because you have more scan positions isn't going to necessarily make it a heavier, more difficult model to process.
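
As a rough, hypothetical sketch of that spacing trade-off in Python (the per-scan capture and alignment times below are made-up placeholders, not Matterport figures; only the idea that alignment takes longer when scans are farther apart comes from the talk):

def total_time_minutes(path_ft, spacing_ft, capture_min, align_min):
    scans = path_ft / spacing_ft              # shorter spacing means more scans
    return scans * (capture_min + align_min)  # each scan = capture + alignment

path = 1000  # feet of scanning path to cover (hypothetical)

# Wide spacing: fewer scans, but assume alignment takes longer per scan.
wide = total_time_minutes(path, spacing_ft=8, capture_min=0.5, align_min=1.5)
# Tight spacing: more scans, but assume each one aligns quickly.
tight = total_time_minutes(path, spacing_ft=5.5, capture_min=0.5, align_min=0.5)

print(f"8 ft spacing:   about {wide:.0f} minutes")   # ~250 minutes
print(f"5.5 ft spacing: about {tight:.0f} minutes")  # ~182 minutes

Under those assumed numbers, the tighter spacing wins even though it increases the scan count by roughly half, which is the point being made here.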

Amir Frank:
So, more scan positions, if the problem with no alignment is caused by infrared light, and you should know that if you've got a bunch of light coming in, then you could scan at a different time of day, it being cloudy if you're scanning outside the structure and feel like, "Okay, it's a cloudy day, I can go ahead and do that." Not so much the case, clouds don't stop infrared light from passing through. They just disperse it and make it a little bit broader, so it's not direct, but they don't stop it, so it could still cause an issue even on cloudy days. Other ways to reduce light, blinds, try to make sure that, I don't know, well, blinds, really, that's about it, I guess, I can't think of anything else right now, but in each situation, it could be different. Using the 360 cameras. 360 cameras, the Pro2 in 360 capture [360 Views] mode or the BLK have no problem with infrared light. So if you're indoors, not outside, outside it's a different issue altogether for why a 360 image may not be well-converted to 3D.

Amir Frank:
So if you're inside, you've got too much ambient light, Capture, you're dealing with the Pro2 and everything is great, but you're running into these issues where there's a lot of unfilled mesh, you can see it in the minimap, where it's all black and that in your situation, you notice that it's because of the ambient infrared light. Capture it as a 360, [360 View] place that 360 where it belongs in the minimap and then tap it and say convert to 3D, it's still something that's in beta. But it'll take advantage of Cortex to convert that 360 image, again, even if it was captured with the Pro2 you can do this, and regardless of whether there was light in that scene or not because you're using Cortex, it just looks at the 360 pano image to convert. So that could be a little trick that you use to kind of get by if you don't want to wait for the sun to go away. Misalignment, similar architecture issues, use AprilTags, if you're not familiar with AprilTags, we introduced this a couple years ago, it's basically a very large QR code that you just stick to the wall.

Amir Frank:
Very, very important, if you are using the AprilTags, in Capture in the settings, there is a setting that you'll have to enable, which is the assisted alignment. So, turn that on and that will enable the system to identify these AprilTags when aligning, so it'll use those as alignment keys when positioning the next scan position. Let's see here, reduce distance between the scans, as we said previously, again, with repetitive architecture in those hotel hallway-like scenarios, instead of having five to six feet where all of a sudden you find yourself in very similar architecture, maybe just move a little bit less than that so it can identify. Something that is interesting about Capture, and this should help you with scanning, when you scan a position, Capture will first try to align with the last scan position. So that's very, very important to keep that in mind when scanning similar architecture, and that's why keeping those scan positions closer together helps tremendously.

Amir Frank:
So, it'll align with the last scan position instead of some other scan position, if it can't align with the last scan position, that's when Capture will try to look at everything on the floor and it's just a ton more data to look at and try to align, and that's where you could run into trouble. So, try and force it to align with the last scan position by keeping that distance between the two positions short. Lack of data, we just talked about this, it just goes back to trying to increase the amount of data if it's ambient light and so on, I won't go into that again. Merged floors, basically, this is an issue of similar architecture, it's no different, it's just again, it's vertically instead of I guess horizontally. So, again, same thing, stairs, especially in fire escape type stairs when every floor is the same, only go three steps up, so move the camera three steps up, scan, three steps up, scan, and keep doing that. If you go from one landing, halfway up the stairs, onto the next landing, you can definitely introduce some misalignment there and cause a merged floor.
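
Purely as a conceptual sketch of that align-with-the-last-scan-first behavior (this is not Matterport's actual code, just the fallback order described above, with a toy alignment test standing in for the real thing):

def place_new_scan(new_scan, last_scan, all_scans_on_floor, can_align):
    # First, try to align against only the most recent scan position.
    if can_align(new_scan, [last_scan]):
        return "aligned with the last scan (fast)"
    # Otherwise fall back to comparing against everything on the floor,
    # which is far more data to churn through and where trouble starts.
    if can_align(new_scan, all_scans_on_floor):
        return "aligned after searching the whole floor (slow)"
    return "no alignment - move the camera closer and rescan"

# Toy demo: treat alignment as succeeding when positions are within 8 ft.
scans = [(0, 0), (6, 0), (12, 0)]
close_enough = lambda new, others: any(
    abs(new[0] - o[0]) + abs(new[1] - o[1]) <= 8 for o in others)
print(place_new_scan((17, 0), scans[-1], scans, close_enough))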

Amir Frank:
So, keep that in mind with stairs, ideally, it's three steps. Also, it helps with navigation, so where the camera is on the staircase, imagine if you draw just a perfectly horizontal line across that stair, you don't want the next scan position to be on that stair because it'll be very difficult to identify it as a scan position at all. You want it to be a little bit less, so make sure that the next step should be lower than the height of the stair that is the same height as the camera, does that make sense? So, three steps, maybe that's easier. Okay, too much data to process, linking together smaller models, we went over this a couple times already so if it does end up that it's just too much to process and the system doesn't work out, or if navigation just becomes very slow and you're not happy with it, break it up, smaller models, and just link those together. Prevention techniques, basically, when approaching a large space, it's always good to have a plan.

Amir Frank:
Have an idea of the floor plan for the space that you're scanning, I always like to draw this vision of if you've got a pencil down on the floor plan and you're drawing a line around, if you need to pick the pencil up and move it, you're essentially breaking the path of alignment. When doing that, and eventually you're going to have to do this, but when doing that, that's when again we go back to Capture, looking at every scan position on that floor to find alignment. So make that as easy as possible for Capture to do by placing the camera, after you break that path of alignment, over a scan position that's already been scanned. Don't go to the next one, scan again where it's already scanned so it has that maximum amount of overlapping data to align with. Be incredibly meticulous, it is so important to be just incredibly thorough and meticulous when scanning large spaces, even more so than with smaller spaces, because you just have so many more opportunities for misalignment and issues. Five feet every scan position, be very consistent and again, stairs, absolutely critical: three steps, scan again, three steps, scan again, and so on.

Amir Frank:
Consistency, try not to just jump around too much, I completely understand in some cases it's very difficult, especially when there is still maybe even construction being done on one floor that you need to scan next because that's the one, and so you just kind of pass it. Try your best to avoid those situations, not only does it make it easier for Capture to scan if you keep that path of alignment as long and continuous as possible, but also if you do run into trouble, it's going to make it so much easier for our support team to help you with fixing the problem. Unmerging floors, fixing misalignments, and anything like that, so having that consistency in how you scan the property is very, very important. Don't rush, again, very time-consuming, always keep that in mind, take your time, and just go through it. That's it for me, that's the nuts and bolts of scanning large spaces, so I'll bring Volkie back in and let him talk about what's going on with Android, Volkie?

Volkie Yelkovan:
Thanks, Amir, hey everyone, this is Volkie Yelkovan from the product marketing team, I also have Elizabeth from our team here today helping with the great questions that we are getting during this session. So I'll talk to you about two topics, one is the Android, the other one is Highlight Reel that we are happy to share some news around. So there were a lot of questions and expectations around our support for Android, and I believe that our product manager Kirk joined us to talk about it very recently. So I'm here to build on the news that we shared earlier, so Matterport Capture for Android is now available in the Google Play store unrestricted, meaning it's still in beta, but it's in an open beta, before it was in private beta and we were controlling aspects of it. Now it's open so you can come in, download the app, and as you can see on the left side of my screen, this is where you're going to come to and then install it.

Volkie Yelkovan:
There are some requirements around it, so obviously as you know, Android system is very diverse, there are many different types of cameras and versions, unlike iOS. So, therefore, we bring some requirements around this so that we know we work well, and so we have requirements for devices, for the OS, operating system would be 8.X Oreo, 9.X Pie, 10.X Q, and then 11.X. So those are the OSs we support, and we also like the devices to have 3GB of RAM or more to operate efficiently. Then Android devices certified by Google and unrooted, so if you have a device outside this, it may or may not work, but we recommend coming in, especially during beta, trying with this configuration. Now, which cameras do we support? By the way, smartphone capture as we have for the iOS is not yet available, so you will need to use Android to connect to a supported camera to drive that camera.

Volkie Yelkovan:
So which cameras are supported? The Pro2, the Pro2 Lite, Insta360 One X, we're also supporting the new One R, but they have beta firmware, so you have to get their beta firmware to operate with our Android Capture app, Ricoh Theta Z1 and Theta V. So these are the supported cameras, during the beta process we will keep on adding new devices and just stay tuned for news from us, we have a support page as well around Android, so please keep checking on that. So that's exciting, and then at some point, we will introduce the handheld smartphone capture ability so you will not need any camera, so you'll be able to just use your Android phone to capture and create a digital tour in the future. We will also make [inaudible 00:27:18] available with this, so we will come out of beta in the next say few months or so. So that's about Android, the other thing that we have released in an open beta is the Enhance Tour option.

Volkie Yelkovan:
So you can see on this GIF image is a short snippet of that capability, this is one of the top used features for our workshop, about 25% of customers use it. So, I'm glad to share this improvement which focuses around new controls, so there are new settings we have introduced including the transition speed for the tour, panning speed that can be adjusted now with the easy screens here on the left you can see. Also, rotation angle can be customized, and we also added a preview button so that you can see without it being live, shows your current highlight and then it also opens the edit so you can make more edits as needed. So, this is now in an open beta, so please go ahead and give it a try, there's also a support article we posted, so if you search for Highlight Reel, that support article will come up and you can also use that to help you with trial of this new functionality for our Highlight Reels. So go ahead, enjoy, give it a try and let us know how we are doing, and that's all I had, Amir.

Amir Frank:
Sounds good. Yeah, this new Highlight Reel feature is actually really cool, used it, really exciting. So, this basically here is the video, I was going to cover labels and measurements, but I think what I'd rather do is focus on the questions for the remaining half-hour, we do have a good amount of questions that are coming in so I'd like to address those and again, if we have time after we've addressed all those, then I'm happy to play through that video. It's about 10 minutes, like I said, the emails that'll go out after this webinar will include a link to that in YouTube so you can watch that on your own as well. Lastly, just wanted to go over support, so if you have any kind of questions, love this webinar because it allows you to ask these live questions and go about answering them. But if we are not able to get to them here, obviously if you go to matterport.com, hit Resources, the first option at the top there is Support; just click on that and it'll basically get you to our support hub that has frequently asked questions answered there, a link to our help center with hundreds of articles of information including how to scan a large space.

Amir Frank:
As well as contact information for our support team, which you can reach at support@matterport.com, always happy to help with your questions there. If you are in the US, your phone number to reach support is 408-805-3347, if you're not, go to that support page and you'll see the phone number that is correct for you from your location down there at the bottom, so just go to support, scroll down to the bottom to find a phone number that's right for you. I just always like to remind that with communication, it's always important to make sure that your contact information, that email that you have in your account is up to date. If we need to send out an email relevant to updates with your account or whatever it might be, you're not going to get those emails if that's not up to date, so just go in and confirm that. Finally, stay connected with us on Facebook, if you go to Facebook.com/matterport that is our official corporate Facebook page.

Amir Frank:
We're also always talking about what's the latest and greatest that's happening with Matterport, if it's releases like the Highlight Reel that Volkie mentioned or anything else, you'll find it posted there, if you are interested. We also have a Facebook group called MOUG, it's M-O-U-G for Matterport Official User Group, great place to communicate with fellow Matterporters and technicians out there scanning and using the products. So if you're looking for tips and tricks from people in the field, great resource, and in that resource we've seen a lot of really amazing models be presented. So if you want, you can go to go.matterport.com/nominate-your-space to nominate your amazing spaces, so if you have been posting or if you were impressed by a post that somebody put on a space that they had scanned, let them know about that. It would be really cool to have that in our gallery and Destination Everywhere website, and so with that, let's go to Q&A.

Amir Frank:
So I did see a question come in that I wanted to touch on, and Volkie, by all means, if you see a question that you want to address, just chime right in and take it away. So this one comes in from James, he says, "I was told we were limited to 250 scans, has that changed? I just scanned a 90,000 square foot space and was breaking it up into multiple scans, can I merge them?" So, a couple things, the 250 number is kind of interesting, maybe that's gone up since I last saw. But it's not so much a hard limit, we say 200 or 250 scans because it's a rough estimate of what will produce a model that'll maintain a really good level of navigation experience for your visitors and that's very important. The reality of it, James, it's not so much about the number of scans, it's about the number of surfaces, we are limited to how many polygons end up being used in a model.

Amir Frank:
So, we reduce those polygons to make it more efficient for navigation online, otherwise, imagine trying to download a gigabit worth of information or more even when you're online, most connections are not going to be able to manage that very well. So, 3D models do need to be reduced, and so when you have a tremendous amount of surface data in your space, that can just be more complicated, so what I mean by that, imagine an empty room, you're scanning a big warehouse, 90,000 square foot warehouse, but it's just an empty room. It's just a completely empty warehouse, shouldn't be a problem, that's pretty easy, flat surfaces, we can cover that. You start filling that with racks and racks of material and stuff, every surface, now, if it's a curved surface, it even adds more complexity, everything in that space gets scanned and turned into part of your 3D model.
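
To put some very rough, entirely hypothetical numbers on why that mesh reduction matters (nothing below comes from Matterport; it's just back-of-the-envelope sizing that ignores texture data and vertex sharing):

# Rough sizing for an unreduced vs. reduced triangle mesh (assumed figures).
triangles = 5_000_000              # a detail-heavy large space (assumed)
bytes_per_triangle = 3 * 12 + 12   # 3 vertices x 12 bytes + ~12 index bytes (assumed)

raw_mb = triangles * bytes_per_triangle / 1_000_000
reduced_mb = (triangles // 20) * bytes_per_triangle / 1_000_000  # assume 20x decimation

print(f"Unreduced mesh: roughly {raw_mb:.0f} MB of geometry")      # ~240 MB
print(f"Reduced mesh:   roughly {reduced_mb:.0f} MB of geometry")  # ~12 MB

A cluttered warehouse full of racks and curved surfaces pushes the triangle count, and therefore the payload, up fast, which is why surface complexity rather than scan count is the real constraint.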

Amir Frank:
So, now your 3D mesh becomes tremendously more complicated and a lot more difficult to manage and so that's kind of what's behind it, it gets way too technical to put it in a simple form. You're not going to know how much surface area, you can't calculate that, so that's why we just say, look, 200, 250, that's roughly the limit. But again, just go back to the amount of surface area, after a while you'll kind of get a better understanding if you're doing these larger spaces more and more, you'll get a better understanding of what works. But yeah, that's about it, you will also notice in Capture, there's a little indication up at the top center where it shows you the number of scans that you have in that model. I think the circle turns yellow or something like that when you've reached the threshold.

Amir Frank:
Again, it's not a hard limit, nothing's going to stop you, you just might run into problems when processing, you might run into problems when aligning, so on and so forth, so it's just a suggestion. Another question, so Jeffrey asked how to prevent merged floors from happening, we did go over this, Jeffrey, but just to reiterate if somebody may have missed it, super critical, I mean, I absolutely cannot stress this enough, very, very important when dealing with floors where every floor is the same type of architecture in that staircase, do every three steps. Every time I've done this, it's never failed for me, every three steps, it takes a little bit longer, I get that, but it is the way to prevent merged floors.

Volkie Yelkovan:
There's a question I can take, Amir, while you-

Amir Frank:
Yeah, by all means.

Volkie Yelkovan:
Yeah, so the question from [Abhijit PP 00:36:43] is, "Do you have any plans to support offline viewing on platforms other than iOS?" Great question, we don't have any immediate plans, Android will likely be the next platform, but we don't have a date planned, so stay tuned, please.

Amir Frank:
Good question, actually, Abhijit also asked, "Will assisted alignment," and this refers to the AprilTags that I was mentioning before, "work with 360 cameras?" And the answer is yes, absolutely, as long as you go into settings in Capture and enable assisted alignment, it doesn't matter what capture device you use, they'll be able to take advantage of those AprilTags to assist with alignment, I guess, yeah. Tim asked, "Is there a limit to the scans the app can handle?" This kind of actually goes back to using a good iPad and iPhone that I was talking about, a more powerful, stronger iPad will have an easier time finding alignment than an older device.

Amir Frank:
So again, something that if you are scanning large spaces, especially the wide ones that have a lot of scan positions on a single floor because when you do end up breaking that path of alignment, again, it's looking at all that massive amount of data, a strong CPU and GPU in your mobile device will be able to process more data faster and will be able to possibly reach that point of saying, "Okay, these are aligned, this is enough data for me to say these two positions line up together." Whereas an older device may not get to seeing so much, may not be able to calculate as much data as fast and time runs out and that's it, no alignment, misalignment, whatever. So, having a strong iPad and iPhone definitely comes in handy, oh, here's a good question from David, "I would love to know which is the best path to scan in a large space, I always thought it was staying close to the walls."

Amir Frank:
Yeah, so if you're talking about, again, the scenario of it being a large empty warehouse, you want to try and initially start by crossing the warehouse down the middle because you don't have those walls close by to use as alignment data. So take advantage of the walls, you're absolutely right, you don't want to be too close to the wall, you want to be maybe 10 to 15 feet away from the wall so the camera can see it and so that you can move up five to eight feet. As you move, it can more easily identify the previous scan position with the current scan position and piece those together with that alignment data on the wall, it certainly helps. Yeah, so you do the outside of the warehouse first and then you can either provide a couple 360 images in the middle if you want to give those points of view. If you don't need that to be part of the Dollhouse and you're really just measuring from one side to the other, maybe all you need is a perimeter scan.

Amir Frank:
You may not need to fill in the entire middle portion, but if you do, once you've scanned the whole outside, it should make it easier to scan through the middle. Depending on the floor, sometimes the floor has a lot of dents and bruises and markings that can be used in the system to help identify with alignment, so sometimes lowering the camera down a little bit closer to the floor will help assist with that in my experience so that you can cross an empty warehouse. Again, as long as you maintain that five feet distance between scan positions and you have nothing other than floor basically, super tall ceiling, walls are too far away for the camera to see it and all you have is floor. Assuming you are using the Pro2, maybe lowering the camera a little bit will help it assuming, again, the floor is not just completely spotless.
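
For a sense of scale, here is a hypothetical perimeter-pass estimate in Python (the warehouse dimensions, wall offset, and spacing are assumptions for illustration; the 10 to 15 foot wall offset and roughly five-to-eight-foot spacing are the figures discussed above):

# Hypothetical: how many scan positions a perimeter-first pass might take.
width_ft, depth_ft = 300, 300   # a 90,000 sq ft warehouse, assumed square
wall_offset_ft = 12             # walk roughly this far from the walls
spacing_ft = 6                  # distance between scan positions

# Length of a rectangular path inset from all four walls.
path_ft = 2 * ((width_ft - 2 * wall_offset_ft) + (depth_ft - 2 * wall_offset_ft))
scans = round(path_ft / spacing_ft)

print(f"Perimeter path: {path_ft} ft -> about {scans} scan positions")  # ~184

So a single perimeter pass of a space that size already approaches the 200 to 250 scan guidance before the middle is filled in, which is where breaking the job into linked models starts to make sense.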

Amir Frank:
Then of course, again, this goes back to using the right tool for the right job, this isn't possible for everybody, but something like the BLK that can see for incredible distances would help with that as well.

Volkie Yelkovan:
Amir, just to give you a break, there's a question around a topic that we haven't covered today, but it's always relevant: "Will the blur tool be handled before uploading to Google Street View, and will those blurs be reflected on Google platforms?" So the answer to that is, the blurs will be reflected in Google Street View as long as the blur happens before uploading to Google. After uploading to Google you can't blur, but you can do it before uploading, so that's the answer to that.

Amir Frank:
Yeah, and I'm assuming it would be possible, you can remove the panos that you've previously uploaded to Google, you can say, "Remove from Google," using the Google Street View upload tool.

Volkie Yelkovan:
Right, and reupload.

Amir Frank:
Do the blurring, which affects your pano in Matterport, and then reupload, is that right?

Volkie Yelkovan:
Yeah, probably that's doable, that's another workaround if that's needed, yeah.

Amir Frank:
Question here, this is actually a really good question regarding how to plan for scanning a large space. So, [Alore 00:42:10] asks, "How do you recommend to plan for a scan path for a very large facility, 100,000 square feet containing many objects?" So assuming the space contains many objects, you've got those objects to serve as great alignment data. Basically, the rule of thumb is in the back of your head, keeping the path of alignment as long and continuous as possible. Always try to have your scan position easily be aligned with the previous scan position. So when you pick up your camera and move away to continue from another area, I'm not saying it's going to cause a problem, but it certainly increases how much the Capture app has to work through and look at to base that alignment on, and so on and so forth. So, just basically plan it out in a way that minimizes the amount of picking up the camera and moving to another location.

Amir Frank:
I know in some areas you're dealing with tall apartment buildings, 16 to 30 floors and even higher sometimes, they have in the code, you have to, as a builder, you have to provide two staircases just to make sense so more people can get out faster. So that's part of the code of building, so now you're dealing with scanning a floor, going up one of the staircases, but you want the other staircase to be part of your model, so you have to break that path of alignment as I just mentioned, kind of go back and scan the other staircase. So how do you do that? Very, very tricky and complicated structure to scan, but again, you do your best to minimize the number of breaks in the path of alignment. So you start on one floor, you go up the staircase, scan that entire floor, and maybe work down the other staircase back to the original floor.

Amir Frank:
Now you should have enough data, and as long as you use AprilTags, maybe that will help assist when you go back and break that path of alignment, make sure that you're scanning over a scanned position that was previously captured with one or more, maybe two AprilTags in that area at camera height so it can easily identify them. Just basically do everything you can to make it as ideal as possible for Capture to find that alignment, but that's about it, so once you make it back there, then you scan up the staircase and keep going, so hopefully, that helps. Cathy asks, "Is there a way to link 3D scans with drone videos?" So this is a really good question, at this time, video is best linked, I think, basically, as an embed in a MatterTag.

Amir Frank:
I can't really think of a better way right now off the top of my head that you would be able to link video, I mean, basically, drone video really shouldn't be any different than any other video, Cathy, so ideally, you would have that in something like Vimeo or YouTube, you can even have it unlisted so that it's not a video that can just be randomly found or searched for but can still be linked into your MatterTag and embedded in the MatterTag, and I think that's probably the best way, to have the MatterTag in the model and when somebody rolls over it, they can just view the footage. Don asks, "What amount of overlap is required for a BLK scan to succeed?" Great question, the BLK, for anyone who doesn't know and is not familiar with it, is a laser scanner, it just goes for a very, very long way.

Amir Frank:
Where the Pro2 is limited to about 12 to 15 feet in how far it can see in 3D, the Leica BLK and its laser scanning abilities can see for 50 meters, I mean, that's 100 feet, easy, even more. But that doesn't necessarily mean that you want to go 150 feet between scan positions, going 50 feet between scan positions can potentially introduce some lack of data in my experience. I've tried this outdoors in parking structures where I've taken the BLK and I've scanned one position and then scanned down about 50, 60 feet, it actually aligned just fine, which was remarkable, but I noticed that in the final product, what happened was, the spot just below the camera where the camera itself can't see was not filled in by the other scan positions. So there was a hole, at every scan position there was a hole in the 3D model, so you don't want to push it, the BLK is certainly not designed for smaller, tighter spaces.

Amir Frank:
So if you are doing a large facility and it's a school, for example, you're going through these hallways and smaller classrooms, it's a big space, but it's kind of built out of a lot of small spaces, so the BLK is probably not an ideal tool for that job. But yeah, so it's hard to give you an exact measurement, Don, I'm sorry, that you want to stay within for scanning with the BLK, again, in my experience, it'll align very well in very large, open spaces, not so well in smaller, confined spaces that, again, add up to a big space, hopefully, that makes sense. Rob asks, "What are the best practices when scanning large spaces with tall ceilings? If I scan a floor at multiple tripod heights, will I run into vertical alignment errors? I've had horrible-looking Dollhouses and Dollhouse Views in the rooms with multiple-story ceilings, and I'm looking for best practices."

Amir Frank:
So, Rob, one thing you can do, so scan most of it with the Pro2, which sounds like you're doing, and then maybe throw in a BLK or even a 360 camera because the 360 camera will see the tall ceiling, it may be able to translate that into 3D data and help where the Pro2 is not, using its 2D. I mean, if you don't have a 360, you can also just try with a Pro2, scan as a 360 capture, and then convert that and see how that works, it'll actually be a really interesting test to see whether it captures those tall ceilings as 2D data or translates them into 3D. The BLK would definitely be able to see that and introduce that as 3D data, be interesting to see if the Pro2 captured as a 360 then converted to 3D would work. As far as raising the camera up on a tripod, a very tall tripod or light stand, I've seen some go up 20 plus feet if you can keep it steady, and those are hard to find and can become kind of expensive, obviously, not nearly as expensive as a BLK.

Amir Frank:
But that should work, there should be no reason why it can't align going up, I've definitely seen it done and it'll definitely help with filling in that information in 3D so you can obviously hide those scan positions later, but you can definitely use a tripod or light stand to increase the height up 20 feet to help. Make sure it doesn't sway back and forth because you are dealing with a camera that's moving, if it's not super well balanced you may introduce a little bit of wobble, which may have an effect on alignment. But again, since you're hiding that and the 2D information isn't really relevant, that should work, I've definitely seen it done, it just requires a very, very steady tripod or light stand.

Volkie Yelkovan:
Hey, Amir, a few folks who came in late, and for other reasons, are wondering if these sessions will be available. These are recorded and then presented on our website under Resources in the main navigation bar; if you click on Resources, you will see Events and Webinars. Under Events and Webinars, you can see the on-demand webinar section if you scroll down, and that's where we present all the past recordings of Shop Talk, so that's for your information.

Amir Frank:
Yeah, definitely, and as I said before for anybody coming in late, a link to this webinar as well as the video I intended on presenting in this webinar about the customization tools in Workshop will be sent to you. So everybody who attended or registered will get an email with those links that we'll have available probably by tomorrow. So I think that's it, we are at the top of the hour and I'm sorry that we weren't able to get to everybody's questions, a lot of really, really great questions, I do appreciate it, I see this subject is pretty popular. So we'll run through this again, and hopefully, we'll be able to get a lot more material out there that'll help you with scanning these large spaces because they're great and super important to scan as well. So with that said, super appreciative, thank you so much for all you guys who attended, who asked these great questions for us to answer, Volkie, huge thank you, Elizabeth, thank you guys so much for helping out with the questions, and Volkie, for participating and introducing that information, and that's it for us.

Amir Frank:
We're at the end of the hour here, so again, thank you so much, hope to see you again, we'll have another Shop Talk, session 11 in another two weeks, and hope to see you then. All right, everybody, take care and thanks again, bye-bye.