Living in the Metaverse
As we have seen, many of the world's biggest tech companies have started hinting that in the coming few years we will be able to live in the Metaverse, or as many call it, Web 3.0.
But what does that actually mean?
Well, when we say Metaverse, we think of a virtual space where people can work, shop, and socialize; in short, do everything we can do in real life, but inside a virtual world.
Layers of the Metaverse
As the physical space becomes dematerialized, the constraints physicality brings will be lifted. Hence, the metaverse will provide us with an abundance of experiences that we are not able to enjoy today.
This is one of the reasons why the biggest brands are already focusing on massive interactive live events (MILEs). These events hosted on platforms like Decentraland and Roblox give us just a basic idea of how the metaverse can make immersive events accessible to everyone. Can’t buy a front row ticket to a real-life concert? In the metaverse, all tickets will be front row.
In the metaverse ecosystem, both inbound and outbound discovery systems continue to exist. Inbound discovery occurs when people actively look for information, while outbound discovery refers to pushing messages out to people, whether or not they asked for them.
Some aspects of information sharing will be crucial in the realm of Web 3.0. Community-driven content is central to metaverse marketing. The content creation we are witnessing in the influencer era will increasingly be shared within the metaverse context. Recently, we've seen early examples of this in the form of non-fungible tokens (NFTs). Having become one of the hottest topics of 2021, these digital assets are now widely used by brands as a marketing tool. This is a great way to boost community engagement, and it will advance much further in the metaverse.
Real-time presence will also be central to discovery. Video game services such as Steam and Xbox already allow gamers to see what their counterparts are doing in real time. A few years ago, the music streaming platform Spotify added a feature that lets users see what their friends are listening to at the moment. Most recently, Twitter launched Spaces as a tool for live audio conversations. These types of social interactions will be possible in the metaverse through various shared experiences.
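To make the idea concrete, here is a minimal sketch of a presence feed in the spirit of Steam's "now playing" or Spotify's friend activity. Everything here (the `PresenceFeed` class and its methods) is hypothetical, not any platform's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class PresenceFeed:
    # user -> what they are doing right now
    _status: dict = field(default_factory=dict)
    # user -> set of friends they follow
    _friends: dict = field(default_factory=dict)

    def update(self, user: str, activity: str) -> None:
        """Publish a user's current activity."""
        self._status[user] = activity

    def follow(self, user: str, friend: str) -> None:
        self._friends.setdefault(user, set()).add(friend)

    def friends_activity(self, user: str) -> dict:
        """What this user's friends are doing at this moment."""
        return {f: self._status[f]
                for f in self._friends.get(user, set())
                if f in self._status}

feed = PresenceFeed()
feed.update("alice", "Listening to a live set in Decentraland")
feed.follow("bob", "alice")
print(feed.friends_activity("bob"))
# {'alice': 'Listening to a live set in Decentraland'}
```

In a real metaverse platform the status updates would be pushed over a live connection rather than polled, but the shape of the data is the same.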
Earlier versions of the internet required some degree of programming knowledge for creators to design and build tools, apps or asset markets. These days, thanks to the web application frameworks, developing web applications is possible without coding. As a result, the number of designers and creators on the web is increasing exponentially.
In the near future, everyone will be able to become a creator on the web without having to spend hours learning programming. This dramatic increase in the number of creators is what defines the economy of Web 3.0, or the creator era.
In reality, we've already started witnessing the rise of the creator economy. Think about YouTube: in the early days, there were a few big YouTubers getting millions of views. They generally created content such as sketch comedies, tutorials or vlogs. Now, millions of others are able to make videos about a variety of subjects, no matter the size of their audience. TikTok gave the same opportunity to an even larger population. In this new market, the consumer can also easily become the creator.
The metaverse will enable people to find their niche instead of sharing the same experience with millions of others. The experiences provided by the creator economy will not only be immersive, social and real-time; they will also be highly personalized.
Spatial computing is a term used to describe technology that merges virtual and augmented reality. According to Radoff, spatial computing helps us manipulate and enter 3D spaces. It allows us to digitize objects using the cloud, connect sensors and motors, and digitize the physical world around us through spatial mapping.
Now, more than ever, we're able to blend the virtual with the physical world. Microsoft's HoloLens and Snapchat's Landmarker are great examples of what we can do with this technology. And even if you haven't been able to get your hands on HoloLens or Landmarker yet, think about the face filters on Instagram that are used every day. Or the massively popular 2016 game Pokemon GO. All of these were possible thanks to spatial computing.
Some of the key aspects of this layer include 3D engines such as Unity and Unreal. Moreover, geospatial mapping through Cesium, Descartes Labs, and Niantic Planet-Scale AR helps with mapping and interpreting the inside and outside world.
Data integration from devices (the Internet of Things), along with biometrics from people, is already widely used in the health and fitness industries. Lastly, voice and gesture recognition are also part of spatial computing software.
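The data-integration idea in this layer can be sketched in a few lines: readings from an IoT device and a wearable biometric sensor are merged into one time-aligned view. The device names and fields below are illustrative, not any real product's format:

```python
from statistics import mean

# Two hypothetical sensor streams, each a list of timestamped readings
smartwatch  = [{"t": 0, "heart_rate": 72}, {"t": 1, "heart_rate": 95}]
room_sensor = [{"t": 0, "temp_c": 21.0}, {"t": 1, "temp_c": 21.5}]

def merge_by_time(*streams):
    """Join sensor streams on their shared timestamp."""
    merged = {}
    for stream in streams:
        for reading in stream:
            merged.setdefault(reading["t"], {}).update(reading)
    return [merged[t] for t in sorted(merged)]

timeline = merge_by_time(smartwatch, room_sensor)
avg_hr = mean(r["heart_rate"] for r in timeline)
print(timeline[1])  # {'t': 1, 'heart_rate': 95, 'temp_c': 21.5}
print(avg_hr)       # 83.5
```

Real spatial computing platforms fuse far richer streams (depth maps, pose, audio), but the principle is the same: many sensors, one shared model of the world.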
As opposed to its fictional counterparts in Snow Crash or Ready Player One that are both ruled by single entities, the real metaverse is expected to be devoid of a single authority. This makes decentralization one of the key features of the metaverse, along with being open and distributed.
When alternatives are maximized and systems are interoperable and constructed within competitive markets, experimentation and growth skyrocket. Moreover, creators become the sovereigns over their own data and products.
The blockchain as well as smart contracts, open-source platforms, and eventually the possibility of a self-sovereign digital identity are all parts of the decentralization process. More and more, distributed computing and microservices enable a scalable ecosystem for developers to access online capabilities.
Everything from commerce systems to specialized AI to a variety of game systems are all becoming available without having to worry about constructing or integrating back-end capabilities.
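The smart-contract idea mentioned above can be illustrated as ordinary code that enforces an agreement without a central authority. This is plain Python, not an actual blockchain contract; the `TicketSale` class and its rules are hypothetical:

```python
class TicketSale:
    """Escrow-style sale: funds release only when the ticket transfers."""

    def __init__(self, seller, price):
        self.seller, self.price = seller, price
        self.ticket_owner = seller
        self.escrow = 0

    def pay(self, buyer, amount):
        # The rule is enforced by the code itself, not by a middleman
        if amount < self.price:
            raise ValueError("insufficient payment")
        self.escrow = amount
        self.ticket_owner = buyer            # ownership transfers...
        payout, self.escrow = self.escrow, 0
        return payout                        # ...and funds release atomically

sale = TicketSale(seller="venue", price=30)
print(sale.pay("alice", 30))  # 30 -- the venue is paid
print(sale.ticket_owner)      # alice
```

On a real blockchain this logic would run on every node and its state would be public and tamper-resistant, which is what removes the need for a trusted central party.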
The key aspect of the hardware layer of the metaverse is the human interface. With the combination of spatial computing and human interfaces, we'll soon be able to gather information about our surroundings, use maps and even create shared AR experiences just by looking around at the physical world. As the technologies get smaller and more portable, they will move closer to our bodies, turning us into cyborgs. We have already started this process with smartwatches and smart glasses.
By using haptics, we can control our electronic devices in mid-air, without having to touch buttons or a screen. Some experimental models even let the user feel the texture and shape of a virtual object.
The seventh layer includes the technology that makes everything mentioned above possible. Ultimately, for all the outer layers to exist, we need a technological infrastructure consisting of 5G and 6G networks. These will massively improve bandwidth and reduce network contention and latency.
Moreover, for the devices mentioned in the human interface layer to work efficiently, we need tiny but powerful hardware. According to Radoff, this includes semiconductors approaching 3 nm processes and beyond, microelectromechanical systems (MEMS) that enable tiny sensors, and compact, long-lasting batteries.
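A rough back-of-the-envelope calculation shows why latency matters so much here. A 90 Hz headset refresh rate and the commonly cited ~20 ms motion-to-photon comfort target are real figures; the round-trip times below are illustrative assumptions, not measurements:

```python
REFRESH_HZ = 90
frame_budget_ms = 1000 / REFRESH_HZ      # ~11.1 ms to produce each frame
motion_to_photon_target_ms = 20          # widely cited VR comfort threshold

# If part of the scene is streamed over the network, the round-trip
# time eats directly into that budget:
for rtt_ms in (50, 10, 1):               # roughly: 4G, 5G, idealized 6G
    remaining = motion_to_photon_target_ms - rtt_ms
    print(f"RTT {rtt_ms:>2} ms -> {remaining:>3} ms left for sensing and rendering")
```

With a 50 ms round trip the budget is already blown before any rendering happens, which is why streamed, immersive experiences lean so heavily on next-generation networks and edge computing.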
While this seven-layered explanation is great for a general understanding, it seems we still have a lot to learn about the metaverse. Of course, we first need to develop the technology that will make up the infrastructure. Then, it will be a game of figuring out what works and what doesn't. Still, one thing is for sure: this new technological frontier will massively change how we live and think.
Next week, our new article: Why is data vital for the metaverse?
AI will certainly be the backbone of the metaverse. It is vital for interpreting real-time data and providing actionable results. The metaverse will rely on quick, real-time responses and events in order to feel immersive and enjoyable.