Video of current progress
This video shows that we were able to make our headset create a mesh of the room and track hand positions. The three spheres on screen mark the positions of the index finger, thumb, and wrist. Though they appear misaligned in the device stream, the spheres were properly aligned when viewed through the headset.
Did this week
1. We spent a few days resolving an issue where The Lab could not connect to Unity; this prevented us from starting development earlier. Ultimately, Team 4, who are also developing on Magic Leap 1, provided the solution: delete the x64 folder and reimport it before starting Zero Iteration.
2. We also reached out to Magic Leap engineer Joerg to resolve the above issue and to set up meeting times. Unfortunately, he will no longer be assisting us.
3. We went through the basic Magic Leap development tutorial to familiarize ourselves with the new platform. Since this is the first time any of us has developed for Magic Leap, we thought a good way to get comfortable with the new environment would be to follow basic tutorials before focusing on the API documentation relevant to our project.
4. We were able to start developing on Magic Leap 1 and made progress toward our milestone. Our first steps were mesh rendering and hand tracking, and we completed both this week.
5. We started writing the code to detect collisions between the hands and the surrounding mesh, and we will continue that work next week.
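At its core, the hand-to-mesh collision check we are planning amounts to a sphere-versus-triangle distance test: a small sphere around a hand keypoint against each triangle of the room mesh. The following Python sketch illustrates that geometry outside of Unity; the function names and the list-of-triangles mesh representation are our own assumptions for illustration, not part of the Magic Leap API:

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Return the point on triangle abc closest to point p (standard Voronoi-region test)."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                   # closest to vertex A
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                   # closest to vertex B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab           # closest to edge AB
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                   # closest to vertex C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac           # closest to edge AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # interior of the face

def sphere_hits_mesh(center, radius, triangles):
    """True if a keypoint sphere touches any triangle of the (meshed) room."""
    for a, b, c in triangles:
        q = closest_point_on_triangle(center, a, b, c)
        if np.linalg.norm(center - q) <= radius:
            return True
    return False
```

In practice Unity's physics colliders could do this test for us; the sketch is only meant to make the underlying distance check explicit.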
Tasks done by each member
- Kai-Wei: dealt with the environment setup issue, went through the intro and hand tracking tutorials, integrated hand tracking into the current application, started looking into collision detection
- Cheng: set up the environment and assisted in debugging, went through the intro tutorial, started looking into mesh coloring
- Pei Lin: set up the environment and assisted in debugging, went through the intro tutorial, started looking into collision detection
- Sujie: set up the environment, went through the intro tutorial, started looking into mesh coloring
Plan for next week
1. We need to follow up with the TAs on how to properly render the headset view in Unity game mode. This does not currently hinder our development, but we are unsure whether we will rely on the feature in the future.
2. Two of us will finish collision detection.
3. The other two members will work on mesh coloring. Currently the mesh is colored automatically (though we are unsure what the default coloring means), and we want to replace that behavior with our own coloring. Once collision detection is done, we will merge the two so that hand collisions drive the coloring.
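The mesh coloring we have in mind can be thought of as painting vertex colors with a radial falloff around the contact point reported by collision detection. Below is a minimal Python sketch of that blending; the function name, brush parameters, and array-based vertex representation are our own assumptions for illustration, not Unity or Magic Leap API:

```python
import numpy as np

def paint_vertices(vertices, colors, contact, brush_radius, brush_color):
    """Blend brush_color into per-vertex RGB colors, fading with distance from contact.

    vertices: (N, 3) positions; colors: (N, 3) current RGB; contact: (3,) hit point.
    """
    d = np.linalg.norm(vertices - contact, axis=1)            # distance to contact point
    w = np.clip(1.0 - d / brush_radius, 0.0, 1.0)[:, None]    # linear falloff weight
    return (1.0 - w) * colors + w * brush_color               # per-vertex blend
```

In Unity, one possible mapping is to write the blended array back to the mesh's vertex colors and render with a vertex-color shader; that is a design option we would still need to verify.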
Progress towards MVP
Though we had issues setting up our environment, we are now able to develop. Our MVP, coloring the mesh on hand collisions, is due in two weeks. We anticipate finishing collision detection and mesh coloring next week, and we will spend another few days merging the code. We are currently on track to deliver our MVP on time.