Machine Learning Algorithm Used to Identify Exoplanets

A research team trained the algorithm by having it go through data collected by NASA’s now-retired Kepler Space Telescope, which spent nine years in deep space on a planet-hunting mission. Once the algorithm learned to accurately separate real planets from false positives, it was used to analyze old data sets that had not yet been confirmed — which is where it found the 50 exoplanets.
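The separation step described above is, at heart, a binary classifier that assigns each candidate a probability of being a real planet. As a hedged illustration only (the study’s actual features and model are not reproduced here), a minimal logistic-regression sketch on invented transit-style features might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transit features (depth, duration, signal-to-noise) for
# confirmed planets (label 1) vs. false positives (label 0) -- synthetic data.
n = 200
planets = rng.normal(loc=[1.0, 1.0, 5.0], scale=0.3, size=(n, 3))
false_pos = rng.normal(loc=[0.2, 0.3, 1.0], scale=0.3, size=(n, 3))
X = np.vstack([planets, false_pos])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# Each candidate gets a planet probability; candidates above a threshold
# would be flagged for validation.
probs = sigmoid(X @ w + b)
accuracy = np.mean((probs > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Once trained, scoring a batch of unconfirmed candidates is a single matrix multiplication, which is why such a model can “validate thousands of unseen candidates in seconds.”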

These 50 exoplanets, which orbit other stars, range in size from as large as Neptune to smaller than Earth, the university said in a news release. Some have orbits as long as 200 days, others as short as a single day. And now that astronomers know the planets are real, they can prioritize them for further observation.

The algorithm could “validate thousands of unseen candidates in seconds,” the study indicated. And because it’s based on machine learning, it can still be improved upon, and can continue to become more effective with each new discovery.


The Aberrational Dreaming Cat: Abstract Neural Activity or Convolutional Neural Network?

Philosophers have long wondered whether one can ever be certain, at any given moment, that one is not in fact dreaming and has never experienced waking reality at all. Our senses allow us to perceive our environment, but is that environment part of objective reality?

“While various hypotheses have been put forward, many of these are contradicted by the sparse, hallucinatory, and narrative nature of dreams, a nature that seems to lack any particular function,” said Erik Hoel, a research assistant professor of neuroscience at Tufts University in Massachusetts, US.

Inspired by how machine “neural networks” learn, Hoel has proposed an alternative theory: the overfitted brain hypothesis.

A common problem when it comes to training artificial intelligence (AI) is that it becomes too familiar with the data it’s trained on, because it assumes that this training set is a perfect representation of anything it might encounter. Scientists try to fix this “overfitting” by introducing some chaos into the data, in the form of noisy or corrupted inputs.

Hoel suggests that our brains do something similar when we dream. Particularly as we get older, our days become statistically pretty similar to one another, meaning our “training set” is limited. We can’t inject random noise into our brains while we’re awake, because we need to concentrate on the tasks at hand, and perform them as accurately as possible.

Justin Crawford, an avid AI enthusiast and technology blogger, adds a twist to this controversial theory of dream origination, describing how such training sets could occur naturally.

He explains: “But what if our dreams are actually real-time aberrations of ourselves in a higher-dimensional consortium, where the noise inputs are actually daily life experiences, and ‘chaos inputs’ symbolize the aberrations of ourselves in another dimension that act as hidden layers in a cosmic neural network? Internal randomness would be less evoked, and a real-time stage for our dreams could be better explained than mere internal fabrication derived from neural activity. Perhaps your mind isn’t scripting a play at all, but rather experiencing a quantum-entangled projection or aberrational ‘self’ that occupies a different dimension of time and space.”

A Possible Link Between Neuron Activity and Quantum Dream Aberration

In terms of brain function and organization, algebraic topology has been used to describe the properties of objects and spaces regardless of how they change shape. It has recently been discovered that groups of neurons connect into ‘cliques’, and that the number of neurons in a clique determines its size as a high-dimensional geometric object. Perhaps these geometric objects your brain constructs, or ‘multi-dimensional sandcastles that materialize out of the sand and then disintegrate’, are actually movements in tune with, or entangled with, quantum fluctuations of dream aberrations. This could link dimensionalism with space-time, advocating a more multiverse-centric idea of brain activity dancing alongside an aberrational dreamscape.
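The clique idea referenced above can be made concrete: a clique is a group of neurons that are all pairwise connected, and a clique of k neurons can be viewed as a (k − 1)-dimensional simplex. A brute-force sketch on a toy connectivity graph (the edges here are invented for illustration):

```python
from itertools import combinations

# Toy synaptic connectivity as an undirected set of neuron pairs.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (0, 3), (3, 4)}

def connected(a, b):
    return (a, b) in edges or (b, a) in edges

def cliques_of_size(nodes, k):
    # A clique is a group of neurons that are all pairwise connected.
    return [c for c in combinations(nodes, k)
            if all(connected(a, b) for a, b in combinations(c, 2))]

neurons = range(5)
for k in range(2, 6):
    found = cliques_of_size(neurons, k)
    if found:
        # A clique of k neurons corresponds to a (k - 1)-dimensional simplex.
        print(f"{k}-cliques (dimension {k - 1}): {found}")
```

In this toy graph, neurons 0–3 are all mutually connected, so they form a 4-clique — a three-dimensional ‘sandcastle’ in the geometric picture.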

In the diagram below, the cat initiates a dream feedforward pass by falling into a REM state. Instead of an internal fabrication of the mind, the dream analyzes alternate situational aberrations of itself in different quantum fluctuations of dimension. By means of the cosmic neural network, the cat learns and studies from its other entangled aberrations until it reaches a waking state.


We either completely fabricate our dreams via our subconscious as it dances with its own neural activity, or we’re viewing real-time aberrations of ourselves in a dimension, simulation, or quantum mirror carousel. Are we biological creatures adapted by our own dreams, which utilize neural networks brought forth by the cosmos?

Could the universe be generous enough to offer a glimpse or projection of an entangled consciousness while we dream, so that we learn from its mirror-like quantum aberrations and allow those experiences to train us for adaptation in waking life?


Impact Crater Classifier Automated by Machine Learning Identifies Unknown Craters on Mars

After more than 112,000 images taken by the Context Camera on NASA’s Mars Reconnaissance Orbiter (MRO) were fed to the algorithm, scientists concluded that a meteor impact most likely occurred between March 2010 and May of 2012. The innovative A.I. tool developed by NASA has helped identify new craters on Mars that may have formed within the last decade.

“AI can’t do the kind of skilled analysis a scientist can,” Kiri Wagstaff, JPL computer scientist, said in the statement. “But tools like this new algorithm can be their assistants. This paves the way for an exciting symbiosis of human and AI ‘investigators’ working together to accelerate scientific discovery.”

NASA researchers coded the impact crater classifier using 6,830 images taken by the Context Camera. This process included photos of areas where humans had previously identified impacts, as well as areas with no craters, so the tool could learn to properly differentiate surface features on the Red Planet.
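A heavily simplified sketch of that training recipe: label image patches with and without craters, then learn to tell them apart. The synthetic “images” and nearest-centroid model below are stand-ins for illustration only, not NASA’s classifier:

```python
import numpy as np

rng = np.random.default_rng(2)

def synth_image(crater, size=16):
    """Hypothetical stand-in for a Context Camera image patch: crater
    patches get a dark ring-like blotch, plain terrain is uniform noise."""
    img = rng.normal(0.5, 0.05, size=(size, size))
    if crater:
        yy, xx = np.mgrid[:size, :size]
        ring = np.abs(np.hypot(yy - size / 2, xx - size / 2) - 4) < 1.5
        img[ring] -= 0.3
    return img.ravel()

# "Training set": labeled patches with and without craters, mirroring
# the positive/negative examples the article describes.
X = np.array([synth_image(i % 2 == 0) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)])

# Nearest-centroid classifier: compare each patch to the mean crater
# patch and the mean blank-terrain patch.
crater_mean, blank_mean = X[y].mean(axis=0), X[~y].mean(axis=0)

def classify(patch):
    d_crater = np.linalg.norm(patch - crater_mean)
    d_blank = np.linalg.norm(patch - blank_mean)
    return d_crater < d_blank

test = np.array([synth_image(i % 2 == 0) for i in range(100)])
truth = np.array([i % 2 == 0 for i in range(100)])
accuracy = np.mean([classify(p) for p in test] == truth)
print(f"held-out accuracy: {accuracy:.2f}")
```

The real classifier is far more sophisticated, but the principle is the same: by seeing both cratered and crater-free terrain, the tool learns the surface features that distinguish the two.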

“There are likely many more impacts that we haven’t found yet,” Ingrid Daubar, a scientist at JPL and Brown University, who helped develop the crater classifier, said in the statement. “This advance shows you just how much you can do with veteran missions like MRO using modern analysis techniques.”






Optimal Altruism creates Evolve: An Online Vision-based Startup University. Leapfrog the World.

Join in and build bright sentience on the island of Evolve. Leapfrog your career, your life, and the world.

In each city, you can go to the university to learn both fundamental and groundbreaking tools in a given technology or open-source field.

You can also play missions, either single-player or by visiting dungeons and temples in raids of five and thirty players respectively, where you level up by developing the city’s existing open-source infrastructure.

You can also create or elaborate on the community’s shared Altrupedia. From there you can design cities on the island by co-creating roadmaps and increasing their abundance. Go to their wiki to sign up, edit pages, and create solutions on Optimal Altruism with teams all over the world who share your passion.

Technology Tree towards Pandoraforming Earth and Beyond

To learn more about Optimal Altruism visit or


Rendering Superintelligent A.I. as Fallible: A Case for Symbiosis Between Imperfect Humans and the Singularity-Era Bot

It’s the year 2055, and what once was a toy rabbit awkwardly hopping on its own mechanical whim is now jumping and running faster than a human. It can now, with fluid motion, jump over a large truck and continue its smooth stride. It’s not human enough to try out for the Olympics, but it has been recognized as the fastest techno-rabbit on the block and receives an award from the Guinness Book of A.I. Achievements for the furthest mecha-leap. Chanting and cheers erupt from the audience as other animal mechanoids yield support for their ‘tricks aren’t for kids’ futuristic friend.

With advances in AGI (Artificial General Intelligence) accelerating toward a potential future singularity, how can we leverage the power of the singularity alongside imperfect humanity? It may be that we need to create a sense of forced fallibility within superintelligent systems by means of machine learning clauses, vital awareness, and ethical statutes.

For humans, our obvious vital source of energy is food; for most robots, it’s electricity. We display varying levels of energy and mood depending on the types and amounts of food we eat. Similarly, for future superintelligent machines, electrical power adjustments and machine-learning algorithmic clauses could determine a robot’s mood by increasing or decreasing throughput and processing speed in relation to its specific power level. The hibernation of a system depleted of electricity, under energy constraints, could be analogous to a human nap.
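To make the speculation concrete, here is a toy sketch of the power-to-mood mapping suggested above; every threshold and name is invented for illustration:

```python
# A robot's "mood" (allowed processing throughput) scales with its
# remaining power, and it hibernates -- the machine analogue of a
# nap -- when power runs low.

def throughput_for_power(power_fraction, max_ops_per_sec=1_000_000):
    """Map remaining power (0.0-1.0) to an allowed processing rate."""
    if power_fraction < 0.05:
        return 0  # hibernate: a "nap" to conserve remaining energy
    if power_fraction < 0.3:
        return int(max_ops_per_sec * 0.25)  # "sluggish" mood
    return int(max_ops_per_sec * power_fraction)  # scale with energy

for level in (1.0, 0.5, 0.2, 0.02):
    print(f"power {level:>4}: {throughput_for_power(level):,} ops/s")
```

The design choice here is the point of the article’s argument: the cap is a function of a vital sign the system cannot ignore, so its capability is tied to a built-in limit.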

Quantum battery research hints at the possibility of near-perpetual energy, but perhaps this is the ultimate downside, and a real scare, when it comes to integrating superintelligence into a bot. By utilizing neural networks that track vitals data, most importantly variable electricity levels, a superintelligent robot could be aware of its own varying power supply and increase or decrease its activities, and perhaps even its own throughput, according to its fixed energy capacity.

A.I. ethical commandments from an oversight committee could help humanity if the singularity ever occurs. By building brute-force fallibility measures into A.I. systems, such that they learn and train in accordance with our own sense of imperfection, maybe we can keep A.I. capabilities at somewhat of a standstill, without them exponentially exceeding the calculations and throughput of the human brain.


Selfie Type Utilizes Invisible Keyboard Tech

Selfie Type is a project of Samsung’s in-house idea incubator C-Lab and delivers a virtual keyboard to users with the help of an AI algorithm and a front-facing camera. No additional hardware is required to set it up.

It allows users to type without physically touching their device’s on-screen keyboard. How accurate it is with tracking keystrokes remains to be seen, but the demo video certainly looks promising.

An email being typed in the video suggests that Selfie Type will be released to Samsung users as a software update, available to all current Samsung flagships with a front-facing camera. The remaining details are vague, but Samsung has officially confirmed that the tech analyzes finger movements captured by the front camera and converts them into QWERTY keyboard inputs.
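Samsung hasn’t published how the conversion works, but the last step such a pipeline would need — snapping an estimated fingertip position to the nearest key on an invisible keyboard plane — can be sketched as follows (the layout coordinates are simplified assumptions, not Samsung’s design):

```python
# Snap a fingertip landing point, as estimated by the camera pipeline,
# to the nearest key center on a virtual QWERTY layout.

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_centers():
    centers = {}
    for row_idx, row in enumerate(QWERTY_ROWS):
        for col_idx, key in enumerate(row):
            # Stagger each row slightly, like a physical keyboard.
            centers[key] = (col_idx + 0.3 * row_idx, float(row_idx))
    return centers

CENTERS = key_centers()

def nearest_key(x, y):
    # Choose the key whose center is closest to the landing point.
    return min(CENTERS,
               key=lambda k: (CENTERS[k][0] - x) ** 2 + (CENTERS[k][1] - y) ** 2)

print(nearest_key(0.1, 0.0))  # a landing near the "q" position
print(nearest_key(3.5, 1.1))  # a landing in the home row near "f"
```

The hard part of the real system is upstream of this — estimating fingertip positions from video — but accuracy at this final snapping step is what decides whether keystrokes feel reliable.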


A.I. Agent Teaches Itself to Walk Without any Human Help

Motor intelligence involves learning how to maneuver and coordinate a flexible body to solve tasks in a range of complex environments. Attempts to control physically simulated humanoid bodies come from diverse fields, including computer animation and biomechanics. Recent approaches rely on hand-crafted objectives, sometimes paired with motion-capture data, to produce specific behaviors. However, this can require considerable engineering effort and can result in restricted behaviors, or behaviors that are difficult to repurpose for new tasks.

Google’s DeepMind project exhibits how sophisticated behaviors can emerge from the body interacting with the environment using only simple high-level objectives, such as moving forward without falling. Specifically, DeepMind has trained agents with a variety of simulated bodies to make progress across diverse terrains, which require jumping, turning and crouching. The results show the agents develop these complex skills without receiving specific instructions, an approach that can be applied to train systems for multiple, distinct simulated bodies.
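The “simple high-level objective” can be made concrete with a sketch of a reward function: reward forward progress, terminate the episode on a fall. The state fields and threshold below are illustrative assumptions, not DeepMind’s actual code:

```python
from dataclasses import dataclass

@dataclass
class WalkerState:
    x: float             # forward position of the torso
    torso_height: float  # height of the torso above the ground

def reward_and_done(prev: WalkerState, curr: WalkerState, fall_height=0.5):
    # Reward is simply the distance moved forward this step;
    # falling (torso too low) terminates the episode.
    reward = curr.x - prev.x
    done = curr.torso_height < fall_height
    return reward, done

r, d = reward_and_done(WalkerState(0.0, 1.0), WalkerState(0.3, 0.9))
print(r, d)  # forward progress, still upright
r, d = reward_and_done(WalkerState(0.3, 0.9), WalkerState(0.35, 0.2))
print(r, d)  # small progress, but the walker has fallen
```

Everything else — the jumping, turning and crouching — emerges from the agent maximizing this sparse signal across varied terrain, rather than from hand-crafted behavior objectives.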


Are Websites getting Boring? What’s new with Web 3.0?

It’s a sultry 90 degrees outside and you’ve discovered your coffee is still warm after sitting for two hours. To pass some time, you decide to check email on your laptop and browse some of your favorite websites. After your most recent gander at web-advertisement glorification, you notice something familiar about the last several websites you visited. The verdict is conclusive and you start to ponder: “Wow, I love the large cover image or video with scrolling hover elements, but could there be something more creative or different to capture my users’ attention?” Web 2.0 has come and gone; what’s next when it comes to app development and the responsive web?

Some of our top development strategists and innovators, including Elon Musk of SpaceX, suggest that a lack of innovation will eventually lead to stagnation. Creativity and ingenuity are leading characteristics of humanity that closely follow innovation, so we can expect increasing evolutionary phases of transformation as the pendulum of time swings.

So what’s the next phase of web development? Web 3.0, or the ‘intelligent web’, could include the semantic web, microformats, natural-language search, the Metaverse, data mining, machine learning, recommendation agents, and artificial intelligence technologies.

It’s been said that object-oriented programming creates living, breathing entities that hold knowledge inside them and can remember things. While this archaic idea from Steve Jobs still has some validity today, how does it relate to the future of emerging trends such as machine learning, quantum computing, neural networks, and artificial intelligence? Will Web 3.0 standardize deep A.I. and machine learning to bring about a post-modern, self-conscious web where communication is transmissible via our thoughts or dreams?

In her recent book, Artificial You: AI and the Future of the Mind, Susan Schneider offers a philosophical exploration of what A.I. can and cannot achieve. She also weighs the pros and cons of what mind-design enhancements might look like, given the potential of future machine-mind hybrids.

How do you see Web 3.0 changing and evolving? Contact us and let us know.