Yeah, that is really impressive; I thought we would be years away from the realistic Meta avatars becoming usable on actual hardware. I don't really understand what's so metaverse-y about this, though. Apple showed off the same concept with no ties to the idea of a metaverse.
EDIT: I watched a bit more, and hearing Mark's ideas of the metaverse he wants, it honestly sounds very similar to the plans and SDKs for the Apple Vision Pro. I don't know if this is truly what the future will be, but it seems exciting.
I got the cartoon with text wrong because I didn't know DALL·E 3 could do text. I guessed 12 based on the vibe of extra detail; that assumption that AI output has more detail trips up a lot of actual humans too. AI image generation is really crazy. The simpler stuff is pretty much indistinguishable from real human work at this point.
If you have an AltGr (Alt Graph) key, you can type more glyphs by holding it while typing a key. Ctrl+Alt also acts as an AltGr key, and Shift changes the glyphs further. On Windows, you need to switch to the US International keyboard layout for AltGr to do anything if you are on the standard English (US) layout.
Ø

You can get this glyph by typing AltGr+Shift+L (AltGr+L alone gives the lowercase ø).
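In case it's useful: the glyphs AltGr produces are ordinary Unicode codepoints, so you can inspect them like any other character. A quick sketch using Python's standard `unicodedata` module (just an illustration, not tied to any particular keyboard layout):

```python
import unicodedata

# ø / Ø are plain Unicode characters; AltGr is just one way to type them.
for ch in "øØ":
    print(f"{ch}  U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

This prints the codepoint and official Unicode name for each glyph, e.g. `Ø  U+00D8  LATIN CAPITAL LETTER O WITH STROKE`.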
Those words are up for interpretation. Vocal synths and regular synths would produce synthesized content in a sense, and that content would be provided by music streaming services. Hopefully it's defined more precisely in the actual law.