Machine Learning Algorithm Used in Identifying Exoplanets

A research team trained the algorithm by having it go through data collected by NASA’s now-retired Kepler Space Telescope, which spent nine years in deep space on a world-hunting mission. Once the algorithm learned to accurately separate real planets from false positives, it was used to analyze old data sets that had not yet been confirmed — which is where it found the 50 exoplanets.

These 50 exoplanets, which orbit around other stars, range in size from as large as Neptune to smaller than Earth, the university said in a news release. Some of their orbits are as long as 200 days, and some as short as a single day. And now that astronomers know the planets are real, they can prioritize them for further observation.

The algorithm could “validate thousands of unseen candidates in seconds,” the study indicated. And because it’s based on machine learning, it can still be improved upon, and can continue to become more effective with each new discovery.
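The study's real machinery is far more sophisticated, but the core idea, train a classifier on confirmed planets and known false positives, then "validate" an unconfirmed candidate only when the model's confidence is extremely high, can be sketched in a few lines. Everything below (the two features, the toy data, the 0.99 threshold) is purely illustrative and not the team's actual pipeline:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a tiny logistic-regression classifier with plain gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def planet_probability(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Made-up features per candidate: (transit depth, signal-to-noise ratio).
# Labels: 1 = confirmed planet, 0 = known false positive.
X = [(0.01, 9.0), (0.02, 8.5), (0.015, 9.5),   # planets
     (0.30, 2.0), (0.25, 1.5), (0.40, 2.5)]    # false positives
y = [1, 1, 1, 0, 0, 0]

w, b = train_logistic(X, y)

# "Validate" an unseen candidate only when the model is very confident.
candidate = (0.012, 9.2)
p = planet_probability(w, b, candidate)
validated = p > 0.99
print(f"P(planet) = {p:.3f}")
```

Once trained, scoring each new candidate is a single dot product, which is why such a model can sweep through thousands of leftover candidates in seconds.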


The Aberrational Dreaming Cat: Abstract Neural Activity or Convoluted Neural Network?

Philosophers have long wondered whether one can ever be certain, at any given point in time, that one is not in fact dreaming and has never experienced the reality of wakefulness at all. Our senses allow us to perceive our environment, but is our environment part of objective reality?

“While various hypotheses have been put forward, many of these are contradicted by the sparse, hallucinatory, and narrative nature of dreams, a nature that seems to lack any particular function,” said Erik Hoel, a research assistant professor of neuroscience at Tufts University in Massachusetts, US.

Inspired by how machine “neural networks” learn, Hoel has proposed an alternative theory: the overfitted brain hypothesis.

A common problem when it comes to training artificial intelligence (AI) is that it becomes too familiar with the data it’s trained on, because it assumes that this training set is a perfect representation of anything it might encounter. Scientists try to fix this “overfitting” by introducing some chaos into the data, in the form of noisy or corrupted inputs.
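A minimal sketch of that trick, padding a small training set with noisy copies of itself (the sample values here are made up):

```python
import random

random.seed(42)

def augment_with_noise(samples, sigma=0.1, copies=3):
    """Return the original samples plus noisy copies -- a simple way to
    keep a model from memorizing a too-small, too-clean training set."""
    augmented = list(samples)
    for x in samples:
        for _ in range(copies):
            augmented.append([xi + random.gauss(0.0, sigma) for xi in x])
    return augmented

data = [[1.0, 2.0], [3.0, 4.0]]
bigger = augment_with_noise(data)
print(len(bigger))  # 2 originals + 2 * 3 noisy copies = 8
```

The model never sees exactly the same input twice, so it is nudged toward learning the underlying pattern rather than the individual points.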

Hoel suggests that our brains do something similar when we dream. Particularly as we get older, our days become statistically pretty similar to one another, meaning our “training set” is limited. We can’t inject random noise into our brains while we’re awake, because we need to concentrate on the tasks at hand, and perform them as accurately as possible.

Justin Crawford, an avid AI enthusiast and technology blogger, adds a twist to this controversial theory, describing how dreams might originate and how training sets could occur naturally.

He explains: “But what if our dreams are actually real-time aberrations of ourselves in a higher-dimensional consortium, where the noise inputs are actually daily life experiences, and ‘chaos inputs’ symbolize the aberrations of ourselves in another dimension that act as hidden layers in a cosmic neural network? Internal randomness would play a smaller role, and a real-time stage for our dreams could be better explained than mere internal fabrication derived from neural activity. Perhaps your mind isn’t scripting a play at all, but rather viewing a quantum-entangled projection, an aberrational ‘self’ occupying a different dimension of time and space.”

A Possible Link Between Neuron Activity and Quantum Dream Aberration

In terms of brain function and organization, algebraic topology has been used to describe properties of objects and spaces regardless of how they change shape. It was recently discovered that groups of neurons connect into ‘cliques’, and that the number of neurons in a clique determines its dimension as a high-dimensional geometric object. Perhaps these geometric objects that your brain is constructing, or ‘multi-dimensional sandcastles that materialize out of the sand and then disintegrate’, are actually movements in tune with, or entangled with, the quantum fluctuations of dream aberrations. This could link dimensionalism with space-time, advocating a more multiverse-centric idea of brain activity dancing alongside an aberrational dreamscape.
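The notion of a neuron 'clique' is easy to make concrete: it is a group of cells in which every pair is connected, and a clique of k neurons corresponds to a geometric object of dimension k minus 1. A toy sketch (the graph below is invented, not real connectome data):

```python
from itertools import combinations

# Toy "neuron" graph: an undirected edge means two neurons are connected.
edges = {(1, 2), (1, 3), (2, 3), (3, 4), (2, 4), (1, 4), (4, 5)}

def connected(a, b):
    return (a, b) in edges or (b, a) in edges

def cliques_of_size(nodes, k):
    """All groups of k neurons in which every pair is connected.
    A clique of k neurons forms a (k-1)-dimensional simplex."""
    return [c for c in combinations(sorted(nodes), k)
            if all(connected(a, b) for a, b in combinations(c, 2))]

nodes = {1, 2, 3, 4, 5}
print(cliques_of_size(nodes, 3))  # triangles (2-dimensional)
print(cliques_of_size(nodes, 4))  # tetrahedra (3-dimensional)
```

In the toy graph, neurons 1 through 4 are all mutually connected, so they form one tetrahedron along with its four triangular faces, the kind of transient 'sandcastle' the quoted description refers to.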

In the diagram below, the cat initiates a dream forward-feed by falling into a REM state. Instead of the mind internally fabricating the dream, the dream analyzes alternate situational aberrations of the cat across different quantum fluctuations of dimension. By means of the cosmic neural network, the cat learns and studies from its other entangled aberrations until it reaches a waking state.


Either we completely fabricate our dreams via a subconscious dancing with its own neural activity, or we’re viewing real-time aberrations of ourselves in a multiverse, another dimension, or a simulation. Are we biological creatures adapted by our own dreams, which utilize neural networks brought forth by the cosmos?

Could the universe be generous enough to offer a glimpse or projection of an entangled consciousness while we dream, letting us learn from its mirror-like quantum aberrations and allowing those experiences to train us for adaptation in our waking life?


Machine Learning Algorithm Automates Impact Crater Classifier to Identify Unknown Craters on Mars

After more than 112,000 images taken by the Context Camera on NASA’s Mars Reconnaissance Orbiter (MRO) were fed to an algorithm, scientists concluded that a meteor most likely impacted the surface between March 2010 and May 2012. The innovative A.I. tool developed by NASA has helped identify new craters on Mars that may have formed within the last decade.

“AI can’t do the kind of skilled analysis a scientist can,” Kiri Wagstaff, JPL computer scientist, said in the statement. “But tools like this new algorithm can be their assistants. This paves the way for an exciting symbiosis of human and AI ‘investigators’ working together to accelerate scientific discovery.”

NASA researchers coded the impact crater classifier using 6,830 images taken by the Context Camera. This process included photos of areas where humans had previously identified impacts, as well as areas with no craters, so the tool could learn to properly differentiate surface features on the Red Planet.
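NASA's real classifier is a trained model, not a hand-written rule, but the idea of separating impact patches from plain terrain can be illustrated with a toy stand-in: flag an image patch whose center is markedly darker than its rim, the way a fresh impact's blast zone tends to appear in orbital images. The patch values and threshold below are invented:

```python
def looks_like_impact(patch, contrast=0.2):
    """Toy stand-in for a learned crater classifier: flag a square
    grayscale patch (list of rows, brightness 0..1) whose center is
    markedly darker than its border."""
    n = len(patch)
    border = [patch[i][j] for i in range(n) for j in range(n)
              if i in (0, n - 1) or j in (0, n - 1)]
    inner = [patch[i][j] for i in range(1, n - 1) for j in range(1, n - 1)]
    return sum(border) / len(border) - sum(inner) / len(inner) > contrast

crater = [[0.8, 0.8, 0.8],
          [0.8, 0.2, 0.8],  # dark center, as in a fresh blast zone
          [0.8, 0.8, 0.8]]
plain  = [[0.8, 0.8, 0.8],
          [0.8, 0.8, 0.8],
          [0.8, 0.8, 0.8]]
print(looks_like_impact(crater), looks_like_impact(plain))  # True False
```

Training on both positive and negative patches, as the researchers did, is what lets a real model learn such distinguishing features on its own instead of having them hard-coded.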

“There are likely many more impacts that we haven’t found yet,” Ingrid Daubar, a scientist at JPL and Brown University, who helped develop the crater classifier, said in the statement. “This advance shows you just how much you can do with veteran missions like MRO using modern analysis techniques.”



Optimal Altruism Creates Evolve, an Online Vision-Based Startup University: Leapfrog the World

Join in and build bright sentience on the island of Evolve. Leapfrog your career, your life, and the world.

In each city, you can go to the university to learn both fundamental and groundbreaking tools in a given technology or open-source field.

You can also play missions, either single-player or in raids on dungeons and temples of five and thirty players respectively, where you level up by developing the city’s existing open-source infrastructure.

You can also create or elaborate on the community’s shared Altrupedia. From there you can design cities on the island by co-creating roadmaps and increasing their abundance. Go to their wiki to sign up, edit pages, and build solutions on Optimal Altruism with teams all over the world who share your passion.

Technology Tree towards Pandoraforming Earth and Beyond

To learn more about Optimal Altruism visit or


Rendering Superintelligent A.I. as Fallible: A Case for Symbiosis Between Imperfect Humans and the Singularity-Era Bot

It’s the year 2055, and what was once a toy rabbit awkwardly hopping on its own mechanical whim is now jumping and running faster than a human. It can, with fluid motion, leap over a large truck and continue its smooth stride. It’s not human enough to try out for the Olympics, but it has been recognized as the fastest techno-rabbit on the block and receives an award from the Guinness Book of A.I. Achievements for furthest mecha-leaper. Chanting and cheers erupt from the audience as other animal mechanoids voice support for their ‘tricks aren’t for kids’ futuristic friend.

With advances in AGI (Artificial General Intelligence) accelerating toward a potential future singularity, how can we reconcile the power of superintelligence with imperfect humanity? It may be that we need to create a sense of forced fallibility within superintelligent systems by means of machine learning clauses, vital awareness, and ethical statutes.

For humans, the obvious vital source of energy is food; for most robots, it’s electricity. We display varying levels of energy and mood depending on the types and amounts of food we eat. Similarly, for future superintelligent machines, electrical power adjustments and machine learning algorithmic clauses could set a robot’s mood by increasing or decreasing throughput and processing speed in relation to its power level. The hibernation of a system depleted of electricity, under energy constraints, could be analogous to a human nap.
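As a rough sketch of what such a power-aware clause might look like, here is a hypothetical policy that ties a machine's throughput, and by the essay's analogy its mood, to remaining power (the tiers and thresholds are invented):

```python
def throughput_for_power(power_pct):
    """Hypothetical policy: scale a machine's processing throughput to its
    remaining power, hibernating when energy is depleted -- the robot's
    version of a nap."""
    if power_pct <= 5:
        return "hibernate"
    if power_pct <= 30:
        return "low"      # sluggish; conserve energy
    if power_pct <= 70:
        return "normal"
    return "high"         # well-fed and energetic

print(throughput_for_power(80))  # high
print(throughput_for_power(3))   # hibernate
```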

Quantum battery research hints at the possibility of near-perpetual energy, but perhaps that is the ultimate downside, and the real scare, of integrating superintelligence into a bot. By utilizing neural networks containing vitals data, most importantly variable electricity levels, a superintelligent robot could be aware of its own fluctuating power supply and increase or decrease its activities, and perhaps even its own throughput, according to its fixed energy capacity.

A.I. ethical commandments from an oversight committee could help humanity if the singularity ever occurs. By building brute-force fallibility measures into A.I. systems, such that they learn and train in accordance with our own sense of imperfection, perhaps we can hold A.I. capability at somewhat of a standstill, keeping it from exponentially exceeding the calculations and throughput of the human brain.


Could Dream Symbol Datasets Recorded from Human Dream Aberrations Help Drive Predictive Modeling for Sentient Intent?

Your alarm sounds, and after waking from deep sleep you feverishly blog several symbols from your dream. In this case the dream symbols were: flying, trees, moon, and rabbits. Upon pressing snooze, you slip back into an even deeper REM stage of sleep, and this time you have a completely new dream with different associated symbols. Now your dream symbols are: tornado, barn, mouse, and thunder. They all sound random, but perhaps there is a pattern.

Those who are deeply versed in A.I. technology are familiar with natural language processing (NLP), the field concerned with interactions between computers and human language and with processing and analyzing large amounts of language data. This could lead to a new subfield of A.I.: the study of dream blogging and the interpretive intersections between its language and symbols.
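A first step toward such a subfield, counting which symbols recur and co-occur across dream-blog entries, takes only a few lines. The dream log below is hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical dream-blog entries: one list of symbols per dream.
dream_log = [
    ["flying", "trees", "moon", "rabbits"],
    ["tornado", "barn", "mouse", "thunder"],
    ["flying", "moon", "thunder"],
    ["trees", "moon", "rabbits"],
]

# Count how often each symbol, and each pair of symbols, appears --
# the kind of frequency statistics any NLP pattern search starts from.
symbol_counts = Counter(s for dream in dream_log for s in dream)
pair_counts = Counter(pair for dream in dream_log
                      for pair in combinations(sorted(dream), 2))

print(symbol_counts.most_common(3))
print(pair_counts.most_common(3))
```

With enough logged dreams, the same counts feed directly into standard association measures, which is where patterns in the seemingly random symbols would start to show.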

To take it one step further, could we use this accumulated data to supply a kind of learned, sentient pseudo-awareness, and have these systems observe their fellow community agents while they themselves are seemingly inactive, or in sleep mode? Could an agent arrive at intuition via machine learning by reviewing the iterations and actions of its A.I. counterparts in simulation environments? The verdict on whether a sentient system could have intent is far from conclusive, but as A.I. technologies move forward the possibilities may become clearer.


Selfie Type Utilizes Invisible Keyboard Tech

Selfie Type is a project of Samsung’s in-house idea incubator C-Labs and delivers a virtual keyboard to users with the help of an AI algorithm and a front-facing camera. No additional hardware is required to set it up.

It allows users to type without physically touching their device’s on-screen keyboard. How accurate it is with tracking keystrokes remains to be seen, but the demo video certainly looks promising.

An email being typed in the video suggests that Selfie Type will be released to Samsung users as an update. The feature would reportedly be available to all current Samsung flagships on release and requires only a front-facing camera. The remaining details are vague, but Samsung has officially confirmed that the tech analyzes finger movements captured by the front camera and converts them into QWERTY keyboard inputs.
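Samsung has not published Selfie Type's actual pipeline, but the final step it describes, converting an estimated fingertip position into a QWERTY keystroke, can be sketched as a nearest-key lookup. The layout coordinates below are invented for illustration:

```python
# Hypothetical last stage of a camera-based keyboard: map an estimated
# fingertip position (normalized 0..1 coordinates) to the nearest key
# on a virtual QWERTY grid.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_positions():
    pos = {}
    for r, row in enumerate(ROWS):
        for c, key in enumerate(row):
            # Stagger each row slightly, like a physical keyboard.
            pos[key] = ((c + 0.5 * r) / 10.0, r / 3.0)
    return pos

KEYS = key_positions()

def nearest_key(x, y):
    """Return the key whose center is closest to the fingertip estimate."""
    return min(KEYS, key=lambda k: (KEYS[k][0] - x) ** 2 + (KEYS[k][1] - y) ** 2)

print(nearest_key(0.05, 0.0))  # top-left -> 'q'
```

The hard part of the real product is upstream, estimating fingertip positions from a single front camera; this lookup only shows how those estimates could become keystrokes.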


Could our Dreams Allow us to Gaze into Alternate Aberrations of Ourselves in a Conscious Universe?

Philosophers have long wondered whether one can ever be certain, at any given point in time, that one is not in fact dreaming and has never experienced the reality of wakefulness at all. Our senses allow us to perceive our environment, but is our environment part of objective reality?

Quantum fluctuations and multiverse theory suggest the possibility of multiple iterations of “you” performing different objectives at the same time. While you drove to work this morning, another version of you went to the park and took a walk. The objective reality that we experience is only a fraction of an infinite number of possible outcomes.

Consider a raindrop and the waves that propagate after it lands. Imagine the raindrop is your observable conscious system, and each wave it creates is an iteration of your reality. Perhaps during REM sleep, the stage that produces some of our most intense dreams, your dream state isn’t a fabrication of the mind but an observation of an alternate iteration of you in another place in space and time. Perhaps this dream-state awareness, or pseudo-simulation of ourselves, is actually a training-neural-network-type system that allows us to function more efficiently in our waking state.

Could datasets from dream blogging or dream recollection enable a predictive model of a more efficient self? The correlation between dreams and waking-life focus could introduce a new era of self-improvement and give credence, and pathways, to self-conscious AI.