Video: Matterport ShopTalk 29: Matterport Android Smartphone Capture Beta | Video courtesy of Matterport YouTube Channel | Aired Live: Wednesday, 3 November 2021 | with Matterport's Amir Frank and Kirk Stromberg

--

Transcript (Video above)

[00:00:02]
Amir Frank: [00:00:00] Welcome webinar listeners. Thanks so much for joining us today. We are here with our very favorite and special guest, Kirk Stromberg, Senior Product Manager at Matterport here to talk all about Android and a couple other fun things that we have in store for you. Very excited to have you here with us, Kirk, once again.

[00:00:28]
Kirk Stromberg: Thank you Amir. Nice to be back.

[00:00:29]
Amir Frank: All right. [00:00:30] We've got people coming in and we want to give you a little bit of time. But with that said, we also have a lot to cover and want to get going. Just make sure your audio is all set up. We'll get to this in the slides. I'll go ahead and start sharing my screen.

Shop Talk 29. Moving on. As we said, my name is Amir, and Kirk is with us. A couple of things that you need to know: basically, [00:01:00] questions and answers we'll cover mostly at the end. But please do submit your questions. The Q&A panel that you see at the bottom there, you should be able to see the little Q&A button there.

Tap on that, up comes a Q&A panel, and type in your questions for Kirk and myself there. Charmaine is always here to help us out with those as well, anything throughout the webinar itself, and then anything [00:01:30] that is for the Q&A session at the end, we'll just address that at that time.

As far as the chat is concerned, yes, this is Zoom, so you have that chat functionality. If you're running into issues with your audio or anything like that, please submit it in the chat, and Charmaine will let us know and we can take care of it.

Unfortunately, when I share my screen, I don't see any of that stuff. Moving on, [00:02:00] and of course, we are recording this and we'll have that available on-demand for anybody interested to come back and revisit. Today's agenda, we'll be talking about Matterport for Android.

Yes, Android connected-camera support has been out for a while, but as you may have heard, now you can actually use your Android phone to capture, using the camera on your Android device to actually capture that Matterport model. We've had that with iPhone [00:02:30] for a while, and so now it's available with Android. New, very exciting.

A couple of the things that are in Beta are Import 360, also very exciting, looking forward to showing you that, as well as Realign 3D Scans, which is also really exciting to be honest, because these are things that are huge. These have been asked for for a very long time and are something that I think will really, really help.

Now, these are Android only. If you've got an Android device and [00:03:00] you're listening in and you're excited about this new Matterport for Android thing, these two are especially for you. All right. Kirk, let's talk Android.

[00:03:10]
Kirk Stromberg: All right. Well, thanks everybody. It's nice to be back. As Amir was saying, right now, the key thing that we're bringing to the market, available now in production in the Play Store, is Matterport for Android being able to capture with the camera that's in your Android device.

I think everybody's familiar with the ability to do this on iOS devices, [00:03:30] and now given the diversity and breadth of Android devices, we've been able to bring that as well to the other side of the house. Now, basically, with iOS and Android, you can use your mobile device to capture a space.

The experience and how you do this on Android is basically identical to iOS and we're going to walk through that in a few minutes. But in general, our goal here is to give you more tools and you use the right tool for the right job. [00:04:00] We want to make sure that there's essentially very little friction between you and capturing the space and doing the job that you need to do.

This is an example where now you've got more tools for your teammates, for colleagues, for workers, and general folks that might be hesitant to try taking a 3D scan or getting a tour of their space. This is trying, again, to lower the barrier to adoption, to get more people aware of the benefits of making a digital twin, offering [00:04:30] digital tours, 3D tours, and potentially to give you on-ramps to getting more business as well.

There are different scenarios here in terms of staff or co-workers where it's unreasonable to outfit them with 360 cameras or with Pro2 cameras or specialized gear, and this is a way that they can use the phones and devices that they've already got to capture spaces. There's a really nice opportunistic angle to this: since everybody pretty much has their phone with them all [00:05:00] the time, you've always got the option to scan.

You don't have to deal with tons of setup and prep. Now, obviously there are a lot of instances where you need to do very careful prep and setup and so forth, and you need to bring in additional gear, whether that's a 360 camera or the BLK360.

Again, the whole idea here is right tool for the right job, expand everything in your tool chest, and keep on going and trying to optimize each of these. Let's [00:05:30] go on to the next slide and we'll talk through some of the details here. I've seen a couple of questions come in that I'll start to answer as well a little bit.

We're going to try to handle most of the questions to the end, but I'll pick a couple of them off. Just mechanics for how to try this out. As Amir had noted, the capture for Android application has been in the Play Store on Android since the spring, and that initial release was for connected cameras, so 360 cameras, the Pro2 cameras.

[00:06:00] The new release that we have in the Play Store now enables capture with the camera that's inside your mobile device, your Android device, and you no longer need to go through the Beta channel in the Play Store to try this out. You can take the production app and go ahead and start scanning with it. There are no special steps required to do that.
That said, we love having Beta testers. If you're up to try the latest before the general public gets it, just go to the Play Store, find the Matterport Capture page, scroll down to the bottom [00:06:30] and become a Beta tester, and you'll get early releases of new features, like the ones we're talking about today, that have not been brought out to the production app.

A couple of wrinkles, a couple of details, with the Android Capture version. You need Android 8 or above to run Matterport Capture with connected devices. To be able to do scans with the camera in your Android device, you need Android 9 or above.

The reason for that is that different versions of Android have different [00:07:00] camera capabilities and camera APIs and we're trying to work our way backwards in terms of compatibility, but right now we've had to hold this at Android 9 and we'll be looking to see if we can bring this back to 8.

It requires some additional work and some changes on our side to be able to do that reliably with each of these devices. Another key thing: in general, the rule of thumb on Android is the exact same as the rule of thumb on iOS. The more RAM, the better. The more modern the device, the better.

Capture, with the 3D conversion [00:07:30] and the AI stuff that we do, is a performance-intensive app. You always need storage to store the data for large spaces, but the amount of working RAM that you have is really key. In general, the larger, higher-end, more modern devices tend to have more, and what ends up happening for you is that you get better performance, typically faster alignment times, and in general, you can do larger jobs with a single device.
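Editor's note: for readers who want these checks in their own tooling, here is a minimal Kotlin sketch, not Matterport's code, of how an Android app can gate features on the OS versions Kirk just quoted (Android 9 is API level 28, Android 8 is API level 26) and read how much RAM the device has to work with:

```kotlin
import android.app.ActivityManager
import android.content.Context
import android.os.Build

// Illustrative only, not Matterport's code: gate phone-camera capture on
// Android 9 (API 28) and connected-camera capture on Android 8 (API 26),
// and read total RAM, the "more RAM, larger jobs" resource described above.
fun smartphoneCaptureSupported(): Boolean =
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.P  // Android 9

fun connectedCaptureSupported(): Boolean =
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.O  // Android 8

fun totalRamMegabytes(context: Context): Long {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val info = ActivityManager.MemoryInfo()
    am.getMemoryInfo(info)
    return info.totalMem / (1024 * 1024)
}
```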

[00:08:00] Experience-wise, for Matterport for Android, capturing with your smartphone is the same experience. You've got two different modes, Simple Scan and Complete Scan. With Simple Scan, I do one rotation with the phone, basically capturing the ultra-wide angle view as I go, and I'm capturing a decent amount of the field of view of the space.

Complete Scan is two rotations. Basically, I'll do one rotation, then a second rotation; those overlap, and so we get a much wider [00:08:30] field of view. If you're capturing a space with high ceilings and you really want to get a lot of detail up above, you may choose to go with the Complete Scan because you can capture a lot more of the space.

Now in general, in any space, you want to do multiple scans to get different angles and multiple coverage, but this is an option for you. One thing to note is that a lot of Android devices, even the low-end devices, have ultra-wide cameras. [00:09:00] That's awesome because that makes it faster and easier for you to capture a space with your smartphone.

Now, the wrinkle on Android is that not all of the manufacturers, the OEMs, Original Equipment Manufacturers, give third parties access to those cameras. For instance, on the Google Pixel devices, we can get access to the ultra-wide camera.

On most of the high-end Samsung devices, we can get access to the ultra-wide. Some manufacturers, like Oppo and OnePlus, don't let third parties get access to that ultra-wide camera. Smartphone [00:09:30] capture with Matterport for Android there is going to operate the same way as if you were, say, on an iPhone X. The iPhone X doesn't have an ultra-wide.

On those older devices, you can still use it. You may find that doing a Complete Scan is preferable because you can capture more of the space. The main camera has a narrower field of view, so as you rotate around, you're just getting a narrower window, and the end result in your space, in your model, is that at the poles, above and below, [00:10:00] you're going to see a larger blurry section.

To get rid of that, you do the Complete Scan and go around twice. That's just a couple of things to note. Also, I don't know if everybody is aware, but Schematic Floor Plan orders have now been enabled for 360 cameras and smartphone captures. Previously, these were not available, but they are now available to order.

You can capture with a 360 camera, you can capture with your smartphone, and order a Schematic Floor Plan from our partner providers and get [00:10:30] that. Now, I know a lot of folks have been doing that with other third parties already, so I just want to make sure that you knew that within the Matterport system, within a Matterport space, you can go ahead and order that as well.

Common questions that have come across have been: do any Android devices have LiDAR? For the ones that have some depth sensors on the back, are we using them for smartphone capture? At the moment, we're not. We've experimented with them, and a lot of times they're used for autofocus, and you can do some crude scanning of objects with them.

But in general, [00:11:00] the density, the frame rate, and the accuracy of them have not been super useful for smartphone capture. Now, in the future, will we see an Android device with a LiDAR sensor on it just like we see with the iPhone 12 Pro or the 13 Pro? I would guess so. It's just a matter of time.
With that I think let's move on and Amir has got some cool tips and tricks for how to capture with your phone.
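Editor's note: the OEM ultra-wide access Kirk describes can be probed through the public Camera2 API. The sketch below is illustrative only, not Matterport's code: it estimates each back camera's horizontal field of view from its focal length and sensor size. On devices whose OEMs expose the ultra-wide, one entry will report a field of view well past 100 degrees; OEMs that hide it typically just don't list that camera at all.

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import kotlin.math.atan

// Hypothetical probe: list the back cameras an OEM exposes via Camera2 and
// estimate each one's horizontal field of view in degrees.
fun listBackCameraFovs(context: Context): Map<String, Double> {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    return manager.cameraIdList.mapNotNull { id ->
        val c = manager.getCameraCharacteristics(id)
        if (c.get(CameraCharacteristics.LENS_FACING) != CameraCharacteristics.LENS_FACING_BACK)
            return@mapNotNull null
        // Widest lens reported for this camera ID.
        val focal = c.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)
            ?.minOrNull() ?: return@mapNotNull null
        val sensorWidth = c.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE)
            ?.width ?: return@mapNotNull null
        // Horizontal FOV = 2 * atan(sensorWidth / (2 * focalLength)).
        val fovDeg = Math.toDegrees(2.0 * atan(sensorWidth / (2.0 * focal)))
        id to fovDeg
    }.toMap()
}
```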

[00:11:28]
Amir Frank: Before we move on, I just noticed [00:11:30] that this is Matterport for Android Beta, and there's a lot of Beta stuff going on here. I remember when we did this before, people were very concerned. Some people are super into Beta and really want to be a part of it.

Others stay away from it as much as possible, and I don't want anybody to think the entire Capture application is in Beta. It's not. The Capture app that you're using with [00:12:00] your Pro2 or your 360 camera is not what's in Beta. It's only the ability, the feature that allows you to use the camera that's in your Android device, that part is in Beta. If you're not using that feature, you're not using anything that's in Beta at this time.

[00:12:18]
Kirk Stromberg: Yeah, that's a great call out, Amir. Thank you. Good call.

[00:12:23]
Amir Frank: This is basically a short little clip on how I go about doing this with [00:12:30] Android or with iPhone, with smartphone capture: how you want to hold your phone to really optimize the output.

It's very common to do this the same way you would take a picture of a panorama or something like that, from a vista point of a faraway landscape. In this case, because all the surfaces are, relatively speaking, much closer to us, we can't do that; we can't have your body be the axis.
What you need to do is have the phone be the axis. [00:13:00] To do that, again, just have your elbows tucked in close to your side to keep a firm grip on the phone. I like to lean back a little bit and have my foot out a little bit in front of me, with the phone right over my toe. What I do is just tap the button right here to get the scan going, tilt the phone so that the circle is aligned with the dots, it'll scan the first position, and then I just rotate to the left.

What I'm doing is just pivoting around my big toe [00:13:30] or the ball of my foot, and there you have it; I just keep going like this until I've done the complete 360. That's the end of the video. Basically, that's the key.

Whether it's Android or iPhone, when you're using your smartphone itself to capture a 360, again, because those surfaces are, relatively speaking, pretty close to us in comparison to [00:14:00] a landscape, you want to pivot yourself around the phone itself and not around your body.

It's super natural to rotate around your own body, and that's exactly what you don't want to do. If you want to check out the full video, just go to Matterport Academy on our website, matterport.com, and see the whole thing.

[00:14:19]
Kirk Stromberg: Amir, before you move on, there are just one or two questions I caught that are tied to smartphone capture. I think maybe we'll break protocol to answer a couple real fast.

[00:14:26]
Amir Frank: Yeah.

[00:14:27]
Kirk Stromberg: Scanning with your phone, [00:14:30] you want to use the same approach as when you're dealing with a 360 camera or a Pro2 camera. Think in terms of how the viewer would move through the room or the space: you're doing multiple scans in a particular area, take two or three steps between scans, and you need line of sight between scans. You can't go around the corner and take a scan where nothing overlaps between those two, because our algorithms won't have anything to anchor to.

In general, the approach is exactly the same; you want to follow [00:15:00] the rules that Amir has illustrated here in terms of minimizing the issues that you might end up getting by rotating around your own axis. Folks are asking how to figure out what version of Android they've got; there are two options. One is to just go to the Play Store: if you can see Matterport Capture as an app that you can download, you're at least on Android 8 or above. Otherwise, you go into System Settings and, [00:15:30] depending on the manufacturer, you'll typically find the system software version there, and it should tell you which version you're on.

[00:15:38]
Amir Frank: Yeah, sorry, great call out. I didn't mention that what we saw here was how to optimize a single scan position. Of course, when you're fleshing out a complete 3D model, like you said Kirk, the same rules apply as when you scan with any other device. You want to move maybe four or five feet and then go on from there. Line of sight, all those tricks. [00:16:00] Let's talk Import 360. This is super exciting. I love this.

[00:16:05]
Kirk Stromberg: Yeah, so like Amir said, we've got three features here where the features themselves are in Beta, meaning we're still testing, we're evaluating, we're getting feedback from you guys in terms of what you like, what you don't like, what's working.

The app itself is in production and not in Beta. But these features, where you'll see the Beta tag when you use them, that's just a heads up to say they've graduated from the Beta channel, [00:16:30] which is bleeding edge, God knows what's going to happen, to being in active development but more stable and a little bit more polished. If you so choose, you can use them; they're still not completely primetime, but we're doing this migration here because on Android we actually have a little bit more flexibility to do these kinds of things, and again, we want to make sure we're getting feedback from you in terms of what you like and what you don't like.

In a nutshell, Import 360 images is really simple. You may have [00:17:00] content, photospheres, 360 images, from cameras that are compatible with the Matterport system, like the Theta Z1, or you may have an incompatible camera that for whatever reason we haven't been able to work with, maybe because it's closed, like a GoPro, or it just doesn't support the Open Spherical Camera protocol that we use to talk to these cameras.

You might also be doing different captures, maybe with a DSLR that you stitch together yourself, working on your own pano tours. [00:17:30] The notion here is that you can now import those as either 3D scans or 360 views, just as if you were working with a supported connected camera, into a new job or an existing project. The thing is, this opens up some more flexibility for you.

For instance, if you need to tune up an image or tweak something in it before you want it in your Matterport space, you can do so. If you want to leverage content that you've already shot [00:18:00] that's coming from some other area, you can pull that in. We've had some folks doing some interesting experiments where they're taking photospheres, 360 imagery, from virtual spaces that don't physically exist and bringing them in.
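Editor's note: the Open Spherical Camera (OSC) protocol Kirk mentions above is a small HTTP API published by Google and implemented by cameras like the Theta. The sketch below is a hedged illustration of what "talking OSC" looks like, not Matterport code; the 192.168.1.1 host is the usual access-point-mode convention for such cameras, an assumption rather than something stated in the talk.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Rough sketch of speaking the Open Spherical Camera (OSC) protocol that
// compatible 360 cameras implement. Closed cameras simply don't serve this API.
fun oscTakePicture(cameraHost: String = "192.168.1.1"): String {
    val conn = URL("http://$cameraHost/osc/commands/execute")
        .openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.setRequestProperty("Content-Type", "application/json")
    conn.doOutput = true
    // "camera.takePicture" is the standard OSC command for triggering a capture.
    conn.outputStream.use { it.write("""{"name": "camera.takePicture"}""".toByteArray()) }
    return conn.inputStream.use { it.readBytes().decodeToString() }  // JSON status
}
```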

A lot of folks here are familiar with how you do virtual staging with our system. One thing that you can do here is virtually stage objects in your 360, and because they're being converted to 3D by Cortex, our artificial intelligence [00:18:30] engine, those virtual objects now also exist in 3D in that space, so you can attach Mattertags to them, and if the 3D mesh is important to you, they'll exist in the 3D model.

Whereas today, pretty commonly, the 3D virtual staging environment is just a visual; you see the visuals in there, which is great, but you don't have actual objects to tag to. You can do these imports one at a time. Say you just really want to pull [00:19:00] this one thing into this project: you're doing an apartment complex and you want to pull a 360 view of the pool into every single apartment model. Previous to this you had to go shoot that each time; now you can just capture it once and pull it into each of those projects, and it's no problem.

You can also do this in batch. You can basically say, I've got 10, 15, 20 of these 360s, and start the process. We'll show you real [00:19:30] quickly how that works. In fact, let's just go right there now. We'll do a static example and then an animation showing how this really works.

In general, you're on your main Capture screen, you're inside a project, and you tap the source selector. Right now, the default is the Android camera, but I tap that selector and now you'll see three options: the Android camera itself, your classic connected camera, and then Import 360 images.

The next choice is whether I'm going to do this as [00:20:00] a 3D scan or just as a 360 view. For folks that are not familiar with 360 views, that's basically the imagery of a 360, but it's not generating 3D data, and so this gives you some flexibility. For instance, if you want something as part of your tour that is really far from the main site, you can place that 360 far away and use it as part of the tour, in that little portal that jumps you back and forth between the sites.

Then I select my imagery. In this particular screenshot, I've got imagery that's [00:20:30] up in my Google Drive folder; I select those files and then hit Select, and we go to the next slide. What Capture does is, if your imagery is already resident on your device, this part is faster. If the imagery is up in Dropbox or Microsoft OneDrive or Google Drive or another cloud provider, then it's got to download that content to the device.

Then you'll basically see this progress bar up at the top telling you which image it's working on and how [00:21:00] many more it has to go, and you'll also notice that there's the classic pause button down at the bottom. As you're importing things, you can sit back and watch, or you can actually fire and forget, put your device down and go grab a cup of coffee.

But if you see something go wrong, or say you want to mark a window or make other adjustments, you can pause the import process, make those adjustments, and then continue and pick it back up. This dovetails really nicely with the Realign 3D Scans feature that we'll talk about next, [00:21:30] just because when you're importing here, sometimes you can get misalignments, and it's nice to catch them early on and try to fix them.

The one thing that I don't think I've highlighted yet is that for this import process, if you're doing a large number of imports, you want the sequencing and the naming of the files to match the way that you would scan.

So basically, instead of just a big old jumble of files that we have to try to figure out, you have a much higher chance of success if you import [00:22:00] them in the sequence in which you scanned them. There's a support page link up on our web page, matterport.com/beta, and you can follow the path to get to the Import 360 support page.

It goes through a little bit more detail on how to improve your chances of success. Ideally, we'd love to just be able to throw a big old jumble of imagery in here and have the AI figure it out, but we've got to walk before we can run, and so right now the best chance of success is to do it in a little bit more of an ordered fashion. [00:22:30] You think that [inaudible 00:22:33] -

[00:22:33]
Amir Frank: - As you would scan.

[00:22:34]
Kirk Stromberg: - Yeah, just as if you would scan. Basically, all the same rules apply, just as if you're scanning. If you're capturing with a camera or a system that is not one of our supported cameras, think about it in terms of: how would I scan normally with a Matterport camera? Then do the same thing to prepare your content. Again, all the same rules apply. How does the viewer move through the space? What kind of path do I want [00:23:00] here? Do I have coverage of it? All that stuff.
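Editor's note: since both speakers stress importing files in the order you would scan, here is a small hypothetical Kotlin helper showing why naming matters. A plain lexicographic sort puts "scan_10" before "scan_2"; a natural-order comparator keeps the batch in shooting order. The file names are invented for illustration.

```kotlin
// Hypothetical helper: sort digit runs numerically so a batch import
// arrives in the same order the positions were shot.
val naturalOrder = Comparator<String> { a, b ->
    val token = Regex("""\d+|\D+""")
    val ta = token.findAll(a).map { it.value }.toList()
    val tb = token.findAll(b).map { it.value }.toList()
    var result = ta.size.compareTo(tb.size)  // fallback if one name is a prefix
    for (i in 0 until minOf(ta.size, tb.size)) {
        val x = ta[i]
        val y = tb[i]
        val cmp = if (x[0].isDigit() && y[0].isDigit())
            x.toBigInteger().compareTo(y.toBigInteger())  // numeric compare
        else
            x.compareTo(y)                                // plain text compare
        if (cmp != 0) { result = cmp; break }
    }
    result
}

fun main() {
    val files = listOf("scan_10.jpg", "scan_2.jpg", "scan_1.jpg")
    // Prints [scan_1.jpg, scan_2.jpg, scan_10.jpg]
    println(files.sortedWith(naturalOrder))
}
```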

[00:23:03]
Amir Frank: - Do you want to cover the left side in an old player or [inaudible 00:23:07]?

[00:23:07]
Kirk Stromberg: - Got it. That sounds good. This dovetails together with what we were talking about before: realign scans. You're scanning and all of a sudden you realize that, we'll make this up, scan number eight has landed in the wrong spot.

Matterport Capture told you that it aligned, but you are on-site, you know where it's supposed to [00:23:30] be, and you can see that we put it in the wrong spot. This is what we call a misalignment. The algorithm thinks it succeeded, but you can see that it was off. Here we now introduce the option to say: I'm going to select that problematic scan, I need to realign this, and I'm going to try to put it in the right spot.

This allows you to move, rotate, and slide it over to the position that it should be in, and we try to give you some hints as to how that works. Then, when [00:24:00] you tell us to realign, we're going to use that constrained position to say, instead of just trying to figure out where the scan should be on the entire floor, we're going to use the hints that you gave us to try to realign it in that spot.

It's not always successful; sometimes there are challenging environments that make it a little bit more hit or miss. But in general, this gives us more of a constraint, and you should get a higher success rate in terms of getting that fixed. Amir has a really cool little sample.
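Editor's note: as a toy illustration, not Matterport's actual algorithm, the realign gesture Kirk describes can be modeled as a 2D pose hint plus a search constraint, the "stay within a couple of meters" idea he spells out a bit later. All names and numbers below are made up for the sketch.

```kotlin
import kotlin.math.cos
import kotlin.math.hypot
import kotlin.math.sin

// Toy model: the user's drag and rotate gives the solver an initial 2D pose
// for the scan, plus a promise that the true answer is near that hint, so it
// doesn't have to search the whole floor.
data class Pose2D(val x: Double, val y: Double, val thetaRad: Double)

// Apply the hinted pose to a scan point in floor-plane coordinates (metres).
fun applyPose(px: Double, py: Double, pose: Pose2D): Pair<Double, Double> {
    val c = cos(pose.thetaRad)
    val s = sin(pose.thetaRad)
    return Pair(c * px - s * py + pose.x, s * px + c * py + pose.y)
}

// The "stay within a couple of metres of the hint" constraint from the talk.
fun nearHint(candidate: Pose2D, hint: Pose2D, radiusMetres: Double = 2.0): Boolean =
    hypot(candidate.x - hint.x, candidate.y - hint.y) <= radiusMetres
```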

[00:24:30]
Amir Frank: [00:24:30] - Just before I get into the clip, just wanted to mention the importance of using that pause button if you do notice a misalignment.

Again, this is very much like when you're scanning with any other camera. If you don't pick up on that misalignment, other scan positions align to that last one and so on and so forth, creating a domino effect of a whole section of the building that's misaligned. You can only realign one scan at a time, not entire sections of buildings.

[00:25:00] So when you go back, you have to realign one position and then the next and so on. It's a lot more work, so just make sure you note that. This clip does not have audio. What you see here is just like we saw on the last slides. I select "3D Scans". I have these four already in my camera roll, so I just select those.

I took them sequentially, one after the next. That's [00:25:30] it. I hit "Import" and it goes in and aligns them. You can see it just building out. This is sped up a little bit so you don't have to wait. Then what I do is zoom in and select one of those, and just like you saw earlier today, I can see the preview of it and whatnot. I can move it to a different floor or whatever, I can delete the scan, or I can choose Realign there at the bottom.

[00:26:00] You can see now I can just click with a single finger and move that around. You can see how the two map one over the other, so you can use that as a reference to align, and with two fingers you can rotate. That's how you would go ahead and align, and once you're done, you just hit the green check at the bottom. It goes through the whole alignment process once more, and then hopefully it's successful and you're done. That's [00:26:30] pretty much it for that. What I also wanted to show here is this.

Let me get out of here real quick. This is that model. What I did here, I was using the Insta360, and I had a lot more control over things like HDR, the quality, the color temperature, my brightness. I also brought these into [00:27:00] Photoshop, and you can see this doorway is all blurred out.

We have a blur feature, I know, but here I was able to do it a lot more accurately and more quickly. And not only that, there's this little plaque on the wall right here; as soon as I step forward, you can see I just Photoshopped our logo on there and blurred this out. So you can get creative with it and play around with it and have fun, and [00:27:30] everything that you do here, as Kirk said, becomes part of the 3D mesh.

If you are going to start Photoshopping couches and beds in here, Cortex will actually take that and convert it into part of the 3D mesh, into 3D data, which is very cool. There's a lot of flexibility and power in this tool. I think this is really exciting.

[00:27:57]
Kirk Stromberg: - I think we've got more [inaudible 00:27:57] examples. As you saw live in the video there, [00:28:00] as Amir was moving the second scan around, you could see a piece of that mesh fragment moving with it. Amir, if we go to the next slide, I'll just show yet another example, maybe a little bit more stark of a realign example.

Here on the left, this is an example where our system thought scan eight was in the right spot, but this is a sequence of eight shots across the front of a building and scan [00:28:30] eight is supposed to really be up at the top above number seven.

Basically, just like you saw in the video, I tapped scan eight and selected the Realign Scan option from the menu where I'm previewing it. Then, in this instance, I need to drag it and rotate it up. In the third image, I've slid it much further than it really needs to go, but you can see the preview fragment from just that scan coming along with it.

When you're using it, you can slide this all around, [00:29:00] and just like fitting a puzzle piece, you can match that up to the prior scans and say, I can see the lines for the parking spot, I can see where the curb and the sidewalk is, and I'll slide that down and I'll try to match that up as best I can.

Then I hit the "Checkmark" at the bottom. Basically, that tells the system, please go try to realign this but use what I've done as a hint to basically say, constrain yourself, don't look everywhere, just try to stay a couple of meters within this [00:29:30] and you will realign there.

I think that is it for the recap. Again, three big things: capture with your Android device, with the camera that you [inaudible 00:29:40] in your Android device; Import 360s, use any source of photospheres to add to or create a new model; and then realign scans. Again, realign scans works with everything that you've got, all the cameras. In general, we try to avoid anything that's specific to one [00:30:00] kind of camera.
Again, this goes back to the theme of giving you guys tools so that you've got more things in your toolkit, so you're more efficient and faster and can get a better product out there for your clients. I think it's time for Q and A too. We've got a bunch here.

[00:30:20]
Amir Frank: - Before we get into Q and A, I just wanted to mention that you can always reach out to our support team and get help [00:30:30] if you run into any issues. If you go to the Resources tab at the top of matterport.com, hit "Support" down there at the bottom, it'll get you to our support hub. Great place, a lot of frequently asked questions, links to videos, and so on and so forth.

You can also see, just above Support, Matterport Academy; that's what I was mentioning before. If you want to get a lot of tips and tricks and how-to videos on pretty much all our tools, everything in Workshop, if you're having issues with that, it's all in [00:31:00] there, so check that out. I believe that's it. Do stay connected: facebook.com/Matterport is our official corporate Facebook page.

All these Betas and things that we put out there are going to be posted there, so if you want to stay on top of it and know what we're doing, Facebook is the way to go. If you want to nominate your space for the gallery, check out go.matterport.com/nominate-your-space. I don't know if you've ever visited our gallery, but I highly [00:31:30] recommend it. It's very cool; you get to see a lot of places that you've probably never been to.

Our next webinar, Shop Talk 30, will be covering all about BIM files, this new thing. If you're in the construction industry, this is going to be huge, so check that out November 10th. With that, let's go to Q and A. I'm going to stop sharing, actually.

[00:31:54]
Kirk Stromberg: - Started here, [inaudible 00:31:55] jumped the gun.

[00:31:56]
Amir Frank: - No worries at all. Now I can see the chat, everything [00:32:00] here. We got a lot of questions coming in, this is great.

[00:32:04]
Kirk Stromberg: - There are a couple that cluster around a similar topic. When you're doing captures with the camera in the Android device, yes, you can do more than one scan. Basically, it's exactly the same; just imagine you had a 360 camera or a Pro2, it's just that you're using the device in your hand. All the same rules apply. Multiple scans, cover the room, cover the space, connect them, either side of the doorways, [00:32:30] all the usual things.

Mark your reflective surfaces as you go to improve alignments and reduce chances of misalignment, so all good there. Let's see. Stitching problems with the ultra-wide versus the normal wide field of view. The question is, would I normally get fewer stitching problems using the ultra-wide when doing Android camera-based scans? Absolutely.

[00:33:00] That's why, if we get access to it, we bias to the ultra-wide, because it's a lot easier on you. You need to take fewer shots. That also minimizes image stitching artifacts, the parallax problem. If we're not really careful, the more shots you've got, the better the chance you're going to have an image stitching artifact.

Since we're also converting that image into 3D, any stitching artifacts that we get in an image would also be projected into the 3D space. That [00:33:30] could lead to weird things where you might have a mesh fragment that makes it impossible to go through a doorway, or you're measuring something and the results look really weird, and then you hit the zero key in Showcase to see the mesh and realize there's some funny fragment sitting there.

That said, again, unfortunately, in the Android world, not all the manufacturers make the ultra-wide camera available to everybody. This is something Google has been [00:34:00] working on in the Android ecosystem for a while: trying to move all the manufacturers to expose all their hardware capabilities to third-party camera apps.

This is one of those really good examples of the differences between iOS and Android as ecosystems. You have a ton more choice, and it's very diverse on Android, but the downside of that diversity and expansion is that it's not as consistent. iOS is very clean, very consistent; you're pretty sure you know what you're going to get at any given [00:34:30] version of iOS.

[00:34:33]
Amir Frank: There's a question here. A Matterport Pro2 user on iOS likes the functionality of the Import 360s as well as the realignment, and asks for a rough timeline of when that's going to be available for iOS.

[00:34:49]
Kirk Stromberg: Yes, there are a couple of questions on that. Right now, we're running this Beta on Android. We do fully intend to catch up on iOS, and right now, the sequence is like, hey, we're going to do the shakedown cruise [00:35:00] on Android, and we'll bring it to iOS as soon as we can. I don't have a specific time frame, but obviously, if this gets the traction and popularity we think it should, we'll bring it to iOS as well.

Ideally, we'd bring everything out at the same time, but the nature of it is that you've got small teams working on different things, so sometimes we'd rather get it out into the world and start getting feedback and guidance from you before doing it on both platforms. This is a case where, don't worry, iOS will follow.

[00:35:29]
Amir Frank: Exactly. [00:35:30] I just wanted to reiterate that obviously, we have every intention of having both applications reach the same functionality. We're not biased towards one or the other, of course, but with software, with testing and all that goes into it, especially with the complexity of something like Android, plus the benefit of the Beta testing abilities that Kirk mentioned, there are going to be differences in when stuff is released. But all of it will [00:36:00] be released on both eventually.

[00:36:02]
Kirk Stromberg: A related question concerning Pro2 users. Again, realign is available for absolutely every camera type, every source and so forth. For the Pro2, in terms of Import 360, we've had some folks that have been able to download their 360s from Workshop, make some adjustments and edits, and then reimport them on Android. Again, this capability is for any [00:36:30] source of 360 imagery. It's a two-to-one aspect ratio, and I think it needs to be 960 pixels or more wide, so really tiny things are not going to work, but you can check out the support page from matterport.com/beta. The idea is to not restrict the sources there, to make those as expansive as possible.
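Editor's note: a minimal Kotlin sketch of the constraints Kirk just quoted, assuming the 2:1 equirectangular ratio and the roughly 960-pixel minimum width hold as stated in the talk. Only the image header is decoded, so this is cheap to run over a whole batch before importing.

```kotlin
import android.graphics.BitmapFactory

// Hedged check: equirectangular photospheres are 2:1 (width = 2 x height)
// and, per the talk, roughly 960 pixels or wider.
fun looksImportable(path: String): Boolean {
    val opts = BitmapFactory.Options().apply { inJustDecodeBounds = true }
    BitmapFactory.decodeFile(path, opts)  // reads dimensions only, no pixel data
    return opts.outWidth > 0 &&
        opts.outHeight > 0 &&
        opts.outWidth == 2 * opts.outHeight &&
        opts.outWidth >= 960
}
```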

[00:36:53]
Amir Frank: This definitely opens it up to being able to use any camera out there. Is there a limit [00:37:00] at the high end? You mentioned a minimum for small panorama images, but I know some cameras are very high definition.

[00:37:08]
Kirk Stromberg: We've had some folks that have pulled in some really high-resolution imagery. I don't think we've tried the gigapixel-class kinds of things, but in general, at some point, since you're typically viewing these things on a mobile device or a computer screen, those extra pixels don't actually do any good. They don't really make that much of a difference. We haven't capped the high end per se, but [00:37:30] in general, if you start doing really large imagery and lots and lots of files, then you start to run into the performance limitations of the device itself. The sneaky thing about our world altogether is that RAM really matters. It might be something you don't pay attention to on a phone, because normally it seems like a computer thing, but it's the working memory, the working capacity that the phone and its processor need to work with. The more RAM, the better; the higher-end [00:38:00] the device, the better; the larger the models you can make.

[00:38:03]
Amir Frank: Like you said, higher resolution is not necessarily going to get you a better end result. In the end, it's just going to require more horsepower from your machines, so why do it?

[00:38:14]
Kirk Stromberg: Again, hey, you've got DSLR-based panos? Great, throw them in there, because you know that you can get some really nice results from those kinds of tours. But in general, there's no upper cap; there's no hard limit we've put in place at the moment.

Now, we might find out, [00:38:30] "Oh wow, shoot, we should've put a high-end limit because we're crashing," something like that. If we find out that there are performance issues around these areas, then we'll put in safety rails, because nobody wants a crash. Chris asks, "Knowing these files are large, do they stay [inaudible 00:38:47] your phone until you move them to a computer?" Yes.

For us to operate on the imported files, we do need them on the device itself. If you're doing lots of imports, you can import them [00:39:00] and you'll have that copy that we're going to work with, and then you may want to clean up your local storage and offload them after you've done the job, imported them, and uploaded. That's a good call out.

[00:39:13]
Amir Frank: This is an interesting question. "In a dark room, can a phone's flashlight work with the app while scanning?" I know we've done something like this. Insurance users use the Pro2, and they strap on some hot lights, not a flash per se, but a light that's [00:39:30] constantly [inaudible 00:39:30], like a flashlight or, ideally, an LED panel. I suppose you would need something like that; we're not able to take advantage of the phone's light itself.

[00:39:45]
Kirk Stromberg: My standard answer with all things Android is: it may depend, just because there are so many variations. Now, classically, on both iOS and Android, if you're accessing the camera, [00:40:00] sometimes you can't actually control the flashlight because the camera controls are trying to control the flash. I haven't tried that in quite a long time, so I don't have a definitive answer.

[00:40:16]
Amir Frank: It may force it off.

[00:40:17]
Kirk Stromberg: I wouldn't hold my breath if that would work.

[00:40:20]
Amir Frank: I would probably stick to external light sources to illuminate things. Unlike the Pro2, where you can just strap some lights on top, [00:40:30] I've seen people put three or four lights around the tripod that they're using, underneath, and that can light up the full 360 as the camera is rotating. If you're hand-holding, that's not going to work. I don't know. Strap it to your chest. What can I tell you?
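Editor's note: a purely speculative sketch of the mechanics behind this exchange. On Camera2, an app that owns the open camera session can request torch mode on its repeating preview request, but as both speakers say, OEM behavior varies and the camera's auto-exposure logic may override or ignore it, so treat this as an experiment, not a supported path. The `builder` parameter is assumed to be the session's existing preview request builder.

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CaptureRequest

// Speculative only: ask for torch mode on the repeating preview request.
// The OEM's camera stack may override this, so don't count on it working.
fun requestTorch(session: CameraCaptureSession, builder: CaptureRequest.Builder) {
    builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON)
    builder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_TORCH)
    session.setRepeatingRequest(builder.build(), null, null)
}
```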

[00:40:48]
Kirk Stromberg: Yeah. Okay. We've got the flashlight question. Let's see. A couple of questions around using a phone versus a Ricoh 360 versus a [00:41:00] Pro2, and outdoors. Let's separate these into active depth cameras and non-active cameras. An active depth camera is the Pro2 or the BLK360 or an iPhone 12 Pro or 13 Pro with LiDAR.

They have active depth sensors in addition to the camera, and they're actively sensing the depth. Outdoors, as we all know, near-infrared can sometimes [00:41:30] interfere with the sensors on the Pro2. That's why, if you're doing a lot of outdoor scanning with the Pro2, you may need to do a 360 view, place it, and then convert it with Cortex to turn it into 3D, which doesn't care about the near-infrared.

Or you use plain 360s, or you scan during a more benevolent time: golden hours, in the morning and evening, or something like that, where there may not be as much near-infrared. With the 360 cameras and with Matterport for Android, where you're just [00:42:00] using the imagery, these are all Cortex-based. They're all non-active depth cameras. They're not sensing the depth of the environment.

They're taking the imagery and then using a deep learning network to project the depth. It's not photogrammetry; photogrammetry is typically where you take a lot of shots and then stitch them together. Those are different techniques people can use to get 3D out of just imagery. But in general, Matterport for Android will work the same way a 360 would. You can take a shot outdoors, and it's going to convert it [00:42:30] to 3D.

It's going to attempt alignment. The alignment process outdoors with those conversions can sometimes be a little touchy. That's also why we want to make sure we've got realign in play here. In terms of people asking, do I need more captures with a phone versus a 360 camera? Generally, no. Just imagine that you are simulating a 360 camera with your phone.

I mean, that's what we're doing: as you rotate around, you are essentially trying to get a photosphere [00:43:00] with your phone. You've seen this for a long time; the Google Street View app has done this, posting directly to Street View, but their approach is typically more long-range oriented and they're not using the ultra-wide.

You've got to take a lot of shots. I think that's the outdoor side. Again, for anything that you're importing, any imagery, at that point it doesn't really matter whether it was taken inside or outside. We're [00:43:30] basically taking the image and trying to project depth from it based on our deep learning network. There shouldn't be any interference there in terms of where it was taken.

[00:43:43]
Amir Frank: -I just wanted to point out that while outdoor capture is possible, it's going to work a lot better when you're relatively close to structures, walls, and things that you would normally find indoors; patio furniture, stuff like [00:44:00] that.

Those items are more easily identified by Cortex because it knows what they look like; it's familiar with them. That's how the system works. It's based, essentially, on a database that is mostly indoor. With the Pro2, outdoor photography is challenging because of the infrared issues that Kirk mentioned; with 360s and Android SPC, smartphone capture,

the challenge comes in using Cortex [00:44:30] to convert, and its knowledge base. It's obviously getting better all the time as we get more scans using the BLK and Pro2 outdoors, but right now, just be patient with it.

[00:44:47]
Kirk Stromberg: -Roger was asking about zoom versus realign. You didn't see it in the example here, but yes, you definitely can zoom in and out during realign. In fact, I find that's actually fairly useful and important to do. If [00:45:00] you've got a large job and you see a misalignment, you start your realign workflow, and you can zoom in and out.

One of the things that's a little tricky, and you may see this, is where you place your fingers to zoom in and out. If you're landing on top of the puzzle piece that's attached to the scan you're realigning, you may find you're moving the realigned scan around instead of zooming in and out. Just try to move your fingers around a little bit.

We're talking about [00:45:30] different ways to try to make it very clear where that realign scan fragment is, the piece of the puzzle that we're showing you. Another rule of thumb would be: before you start the realign workflow, zoom to a reasonable setting.

If the misalignment's really bad, [inaudible 00:45:48] scan way across the room, then yeah, you may have to zoom out, start the realign, drag it over to the right spot, then zoom in, then make your final adjustments, your rotation and your placement. [00:46:00] But you should be able to do that.

I've done that, no problem. It's just that sometimes you hit the fragment and you've got to move your fingers around a little bit to find the right spot. Let's see. Some import questions. Yes, if you have a prior project and then you go off and take 360s, you can add those to the existing project. You don't have to start [00:46:30] from scratch.

Basically, you've got both options. Just as if you had a project and you went back to the site and added scans to that project, exactly the same rules apply; you could just select Import instead and pull them in after the fact. Then, if you've already uploaded that job, you would upload it again to get a new model with the new content that you added. Questions [00:47:00] around compatibility with phones.

In general, the rule of thumb is that a lot of Android devices have great cameras. I mean, the thing that's really cool these days is that, as we all know, smartphones have got a lot of cameras, and a lot of them lead with the camera as their primary asset. In general, when we do captures, we're doing auto [00:47:30] HDR and auto white balance, trying to capture at the highest resolution possible for that particular camera.

The ultra-wides typically have a lower megapixel count than the main camera, but we feel the trade-off for ease of use and minimizing stitching with the ultra-wide is a better thing to bias towards, especially since a fair amount of traffic in terms of viewing models is still on mobile. That seems like a reasonable balance.

We have a lot of folks that have been using [00:48:00] Galaxy S10 devices from three or four years ago, easy. Though the further back you go in time, the less likely it is that you'll have access to the ultra-wide. For instance, the Galaxy S9 doesn't have an ultra-wide; the S10 does. That's, I think, three flagships back at this point. In general, again, the more RAM and the more modern the device [00:48:30] you can get, the better off you'll be if you're shopping for a new device.

[00:48:36]
Amir Frank: I did just want to point out, for anybody interested in importing 360s, that they will be image processed. There is no way of turning off post image processing on the Matterport side. We look at more than just the one scan position when trying to stitch and even out all the exposure and whatnot. You've got a window over here, so this shot is going to be dark, [00:49:00] and another shot over here is going to be very bright; getting those to match is not easy, and a lot of image processing comes into play when doing that.

Then you've got all the other scan positions that are in that range. We don't want you to have an experience where you move from one scan position to another and all of a sudden everything looks radically different just because of the color temperature. That too is taken into account. A lot of image processing goes into making the experience of the digital twin [00:49:30] as continuous and smooth and, I guess, as good as possible.

So if you went to the trouble of doing all this work and you were looking for a specific mood in your images to create the digital twin, it's probably not going to come out exactly like that in the final version. Something is going to change; it's going to get brighter and so on.

[00:49:58]
Kirk Stromberg: Yeah, and in general, we [00:50:00] are trying to stay away from adjusting things on the import path. We're trying to avoid dramatic changes there, because we know that you may have made tweaks yourself. We're trying to find the balance between balancing across the entire space and from all these different angles, versus just leaving it as is. There are a couple of questions in here around mixing cameras on a job.

You can absolutely mix cameras on a job. You could be [00:50:30] scanning with the Pro2, flip over to a 360, flip over to an iPhone. On the iOS side, we support the BLK360; we don't support that yet on the Android side. But you can totally mix all those things. In general, one rule of thumb with mixing is, again, right tool for the right job.

For instance, folks are using a BLK360 for really large spaces, making sure they're picking up 3D depth up on the ceilings of large hangars and warehouses. [00:51:00] If you then need to do a lot of intricate stuff like offices or corridors or shelving, you'd typically flip over to a Pro2 or a 360, because the image quality is generally better than a BLK360's. It's also just a lot faster.

Again, it's: what's the right tool for that portion of the job? For instance, one of the specific questions was, could I do a Pro2 and then go outside with an iPad Pro that has LiDAR, or an iPhone Pro [00:51:30] with LiDAR, and scan that to try and get active depth? Yes, you can. Absolutely.

For the iOS LiDAR devices, the LiDAR sensor has a pretty narrow field of view. When you're capturing with that, while the imagery is coming from the ultra-wide, the depth we're getting from the LiDAR sensor is that narrow field of view in the center portion of your rotation, and everything above and below that, we project the depth with Cortex.

We've been pleasantly surprised that when [00:52:00] you look at the actual mesh or the point clouds from those captures, it's not a giant, stark difference. This is why we've been very happy to be able to train Cortex on the existing scans from Pro2 and from BLK models. What you're getting with Cortex conversions is actually a pretty decently high-fidelity representation of the space. Then again, rule of thumb: active depth cameras are generally going to give you the best results, just because you're actually [00:52:30] sensing the data.

[00:52:32]
Amir Frank: Yeah, John did ask whether scanning outdoors is better or worse with the BLK360. I don't know that I would say that any time you scan outdoors, it should or should not be with the BLK360. On 2D image quality alone, the BLK360 is not going to be as high definition, and its HDR is not as good as something like the Pro2's. [00:53:00] Given the right time of day, which is basically civil twilight, the sun at or just below the horizon, the Pro2 does a very good job outdoors and will get you stunning image quality. I would think about what the purpose of the scan is.

What do you need it to do? What do you need it for? The BLK is a very focused device. Of the products that we support that you can use to create Matterport models, [00:53:30] it's going to be second to none as far as measurement accuracy.

If that's very important to what you need, go with the BLK. If it's 2D image quality, and you're looking for a really good-looking, luxurious walkthrough experience, and measurement is maybe not as important, scanning with the Pro2 outdoors when the sun is just below the horizon might take a couple of days, because you only have that half-hour time period [00:54:00] each day, but it'll result in very stunning image quality and a stunning digital twin. Just keep all that in mind.

[00:54:11]
Kirk Stromberg: Another question: Daniel is asking about the different ultra-wide lenses on different phones. That's a really good one. Among the different manufacturers of Android devices, just because they have ultra-wides, some of them have different fields of view, and so our computer vision code understands and looks [00:54:30] at what hardware the device has got and then provides those targets for you as you rotate around.

It automatically compensates, so you don't have to worry about doing anything manually. Basically, just follow the dots and get those captures. That means that on one device, you might take one fewer shot than you do on a different device, because that device has got a wider field of view and you can get more overlap. Again, we'll do that for you automatically; you don't have to worry about that. The whole [00:55:00] focus there is: get a bunch of shots, minimize stitching artifacts, and make it as easy for you as the method allows.

[00:55:09]
Amir Frank: John is using this for modeling in Revit once he downloads the MatterPak. For modeling in Revit, I would assume that measurement accuracy is a concern. Again, I don't know what the use case is, but I've scanned with the BLK outdoors, and it's brilliant. [00:55:30] It's definitely a little bit slower, but because of the laser, it's going to perform very well in those terms. What else have we got here? We've got maybe a couple more minutes. Can we get a couple more in here?

[00:55:45]
Kirk Stromberg: Yeah. Is there any phone that takes fewer than six shots for a Simple Scan? I don't know of any. I think right now, the latest Samsung flagships have the widest [00:56:00] field of view. We can't go below that, because we also need to overlap a little bit at the end. I think you're probably not going to find anything less than that. Remember, with ultra-wides you also have a little bit of distortion coming from the lens. We're trying to balance the number of shots and the overlap, and you don't want the end result to be distorted either. I think this is one of those things.

You're using a phone's camera to [00:56:30] try to capture most of a photosphere, and there's just physics involved in terms of trying to capture that as you rotate. Again, one thing that you might have mentioned too: if you're doing larger jobs and you do need to use smartphone capture, we've found that a monopod is really helpful. I know we've shown that in the help video, and you guys can find those on the support site.

You want to be rotating around the camera; the camera is the axis, [00:57:00] so if you hold it on a monopod, it helps remind you: okay, I'm rotating around that. Because the classic thing, I think we all know, is that when taking a normal pano, I'm a tourist at the Grand Canyon taking that pano, I typically rotate around myself. That's exactly what we don't want to be doing when we're trying to create a photosphere by stitching.

When you're outdoors, it doesn't matter as much because everything is so far away. But when you're indoors, it does matter, and those stitching artifacts will show.
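Editor's note: the back-of-envelope geometry behind the shot counts discussed above, with a made-up overlap fraction rather than Matterport's actual tuning: the horizontal field of view and the per-shot overlap determine how many shots one rotation needs.

```kotlin
import kotlin.math.ceil

// Illustrative math only: with horizontal FOV f degrees and fractional
// overlap o between neighbouring shots, one full rotation needs
// ceil(360 / (f * (1 - o))) shots. The 25% overlap is an assumption.
fun shotsPerRotation(hFovDegrees: Double, overlap: Double = 0.25): Int =
    ceil(360.0 / (hFovDegrees * (1.0 - overlap))).toInt()

fun main() {
    println(shotsPerRotation(120.0))  // ultra-wide-ish lens: 4 with these numbers
    println(shotsPerRotation(70.0))   // typical main lens: 7
}
```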

[00:57:28]
Amir Frank: Yeah, a monopod helps. [00:57:30] Even a tripod: if you're going to go and do an entire home with smartphone capture, with your Android or iOS device, throw it on a tripod with a pan-tilt head, no problem. It does a great job. You don't have to go that route; it just makes it a little bit easier.

[00:57:53]
Kirk Stromberg: I think we're at time. In general, for all these Beta features, go to matterport.com/beta, and you've got [00:58:00] individual links for each of them to give you some more information. That will help you get those working.

[00:58:09]
Amir Frank: Yeah. That's it. That was awesome. Thank you very much to everybody who attended and for all the questions. I had a lot of fun answering them, and if you have more questions, keep them coming. Like I said, our support channels are always open, and we try to do these Shop Talks once a month. The next one actually is [00:58:30] next week.

This time we pushed it up. With that said, thank you so much to Kirk for joining us and sharing your vast amount of knowledge on Android SPC. Good job, by the way, on the product. It's amazing.

[00:58:49]
Kirk Stromberg: Thank you. Thank you everybody for participating in the Beta programs and coming here to ask all these questions. Keep kicking the tires. Don't be shy about feedback, what you like, what you don't like, and [00:59:00] we'll keep iterating.

[00:59:01]
Amir Frank: Yeah, absolutely. Feedback is always welcome. We love hearing what you think about the product and how we can improve it and make it suit your needs. Thanks again, and this, as I mentioned, has been recorded, so if you do need to revisit any of this information, we'll have it on-demand in the next day or two. If you just go to matterport.com, you'll see it there in Resources, under Events and Webinars, [00:59:30] and you'll be able to find it there. Thanks everybody very much and take care. Bye bye.

[00:59:36]
Kirk Stromberg: Thank you.