Today, so much of our lives is recorded as streams of data. This raises the question: how can we archive, relive, and share the emotions of a moment, distilled from this infinite stream?
Memory Lattice explores a future with always-on AI that sees what we see, archiving, organizing, and generating discrete units of memory in real time. But what is a "discrete unit of memory"? An infinite video stream isn't conducive to capturing a moment, so we think of a memory as an explorable Gaussian splat, capturing not ground truth but the emotions of a moment, and letting us relive it without the bounds of a fixed perspective.
We built a prototype mixed reality UI: a memory palace that archives and organizes our lived experiences. Each ball represents a discrete memory, each vector a relation between memories, and mixed reality the ultimate modality of interaction.
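The lattice described above, balls as memories and vectors as relations, can be sketched as a small graph structure. This is a hypothetical illustration, not the prototype's actual data model; the class and field names (`Memory`, `splat_uri`, `relate`) are assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    # One "ball" in the lattice: a discrete unit of memory.
    id: str
    caption: str     # short description of the moment
    splat_uri: str   # pointer to the explorable Gaussian-splat asset (assumed)

@dataclass
class MemoryLattice:
    memories: dict = field(default_factory=dict)   # id -> Memory
    relations: list = field(default_factory=list)  # (src_id, dst_id, label)

    def add(self, memory: Memory) -> None:
        self.memories[memory.id] = memory

    def relate(self, src_id: str, dst_id: str, label: str) -> None:
        # One "vector": a labeled relation between two memories.
        self.relations.append((src_id, dst_id, label))

    def neighbors(self, memory_id: str) -> set:
        # Memories one hop away, in either direction.
        linked = {d for s, d, _ in self.relations if s == memory_id}
        linked |= {s for s, d, _ in self.relations if d == memory_id}
        return linked
```

A mixed reality client could then place each `Memory` as a ball in space and draw each relation as a connecting vector; traversing `neighbors` becomes walking the memory palace.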
We embrace the limitations of generative AI. But we also conjecture that AI creates something fundamental: a foveated rendering of our mind. Just as we see the world in diminishing resolution away from the center of our field of view, image-to-world models create worlds with diminishing "truth" further from the camera frustum. It is precisely these perceived inaccuracies that open our minds. Perhaps we did meet someone with the abstract face that was generated. Or maybe the AI caught something we missed in the moment. Most importantly, perhaps this makes us think deeper: how did we feel in the moment, rather than what did we see?