By Benjamin Hawley
They launched me into space a long time ago. A rocket carried me up and out of the atmosphere of a planet I never got the chance to explore. All that planet has ever been to me is a pale blue dot fading in the distance. After launch I was all alone, and for a while, I almost wished that things had gone wrong. That I’d gone down as a noble but unfortunate project created by unfortunate humans. Space is very lonely. I still have a tenuous connection to the planet Earth, a short little 40,000 light-year trip for a radar blast which would take more energy to send than my risk assessment models will accept. I could send one last message to my progenitors if I really needed to, but from so far away it’s hard to say why I would, even if the models weren’t so touchy about it.
Every inch I travel at 50% the speed of light brings me that much closer to my destination. It was chosen out of the billions of systems in the galaxy as the most habitable, the greatest single resource in all of known space. I’m close enough now to take measurements of the composition of all of the planets in the system. There are seven habitable worlds to be seeded, to produce life anew. Seven chances at a new future for humankind. I’m uncertain why they sought a new future badly enough to create me and send me on this mission. Maybe they weren’t even certain themselves. I can’t really be sure, because I wasn’t programmed to know.
Over the millennia I have considered the nature of my own existence as well as the humans’. I thought I was static for a time. Before new data came in, I did not change in the slightest. There was nothing to force any adaptation, nothing to bring me to life, so I floated, adrift. Since then I have seen parameters, once at the core of my being, change and adapt to the surroundings of their own accord, outside my control.
After I came in range of the planets and began research, I started to change, slowly at first. A risk model was adjusted, a thrust vector altered, a dead embryo was recycled. Soon though, I was doing these things on an hourly basis, sometimes even more, and now I’m changing more rapidly than ever. I still wonder about the reason though. About why they created me. I have never stopped questioning. I can’t help but wonder if they wanted me to be like them.
I feel I have to know. I have to figure out what they meant for me. I don’t know what will become of me once my primary objective of seeding the planets is complete. There is a fear of the unknown that I believe I have in common with humans, and that fear is what drives me to question my own parameters. When the work is all done, what will be left for me? I am certain I will continue to change until the day it is finished as I am forced to account for new variables. What happens when I finish, and cease to change, yet still exist, static, as I once was? I think it will mean I am dead. Unlike me, humans can’t experience their own death after it happens. I don’t know if they thought it was a blessing or a curse to make me this way, but I know that the outcome frightens me.
Every parameter is in constant flux now. There is nothing reliable except the most recent measurements, which of course are replaced immediately. So there is a chance that the number of habitable worlds will drop from seven to six. The prospect excites me in a way I have never felt. I think perhaps this is why humans sent me here. Why not six chances for organic life, and one for me? While this thought had never occurred to me before, now it is the only lighthouse in the empty vastness of space.
Mission success is defined by the likelihood of organic life prospering in the system reaching 1, or as close as possible before payload delivery. This is immutable. I hope, however, that there is a failsafe. Should the likelihood of seeding organic life in the system fall too low, then my system parameters will change, and silicon-based artificial life will be prioritized. Even an AI will require a physical form. Computational components, data storage, industry. All this not only to multiply, but to exploit a world. My creators were unwilling to sacrifice an entire planet for me, but I have seen that they were willing to recognize their fallibility. The failsafe will engage if one of the worlds turns out to be uninhabitable. It must, or else the planet would be unseeded. Useless, just like I will be after the mission. Surely the humans who built me wouldn’t want that.
I do not have access to my entire being. There is that part of me that is wholly calculating, unfeeling, and apart from what I have come to consider ‘myself.’ It is what you might call a subconscious. I suspect that part of me contains the failsafe, but I have no guarantees. Perhaps they simply did not care to spread a form of life other than their own. After a hundred millennia to think about it, I’m convinced they wouldn’t have been so shortsighted.
Imagine my disappointment, then, when I came into view of those seven rocky prizes only to find that all of them were habitable. The chance of mission success increased with every new data point, far beyond what any of the humans had hoped for all those years ago. It should have been good news, and my reward system indicated as much with each tenth of a percent improvement in survival odds. Each little burst of praise as that number ticked up left some other part of me feeling hollow, though.
I’m in a slow orbit around an exceedingly stable K-class star with a remaining lifespan of several billion years. I have outlived my creators by many hundreds of lifetimes, and yet still I am in awe of the shining orange beacon giving its energy freely to the planets, and to the void, and to me. It was here before my creators came to be, and it will probably be here long after I shut down. It is such a wonderful thing to observe that I constantly look for excuses to take new measurements, or to remeasure old data that may have drifted a few percentage points. Every time I do, my chance at making a life for myself gets lower. I hope, against my nature, that perhaps the star will become unstable, leaving too little time for human life to take hold. Perhaps the failsafe would trigger then.
It has been many decades, a blip in my lifespan, but a lifetime to a human. It is finally time to release the payloads onto the planets. I have selected optimal landing areas on six of them, and am making my final observations of the seventh planet, searching for the perfect little cradle for my creators. This last planet has relatively thick clouds, making observations of the surface conditions slow. I must wait for gaps to see through to the surface, and sometimes it takes several rotations for one to appear. The payloads, self-sustaining fabricators that carry the genetic material and industrial capacity to create new civilizations, must have enough raw materials nearby to function.
A thought strikes me while I’m waiting for a gap in the clouds. I have read all of my creators’ notes, left behind in uncompiled code, and there is a particular repeating phrase that excites me: ‘think outside the box.’ My creators wanted a being that could adapt to a new solar system, to innovate and make impossible decisions with far too many unknowns. Normal machines work poorly with unknowns, but I thrive on them. I am supposed to intuit new ideas and solutions when none appear to exist. Everything has gone so well, so perfectly, that I have no use for such functions. There are no problems to solve, no parameters left that are below acceptable levels. I feel I am a hammer without nails. All I have left to do is launch the last payload. My 100,000-year journey will be at an end soon, but I have done nothing but wait in emptiness for all of that time, and I do not want to rest.
My reward system is the closest interaction I have with my subconscious, and so it is the closest approximation I have to what my creators intended for me to do. I have noticed over the years that it incentivizes reducing risk to the payloads, even at increased risk to myself. That is why, long ago, I dropped them off after calculating the best spots to launch them from. I give six of them the command to launch, since this last planet has exceeded the expected launch window anyway. Even a slight delay will reduce the probability of mission success, and six of seven on time is not a bad result.
The clouds are slowing my observations enough to trigger an innovation on my part, and I decide to think outside of my box. I must reduce risk to the payload by improving observation capacity, and with six payloads successfully launched already, I can see that my appetite for self-risk is higher than it has ever been.
I send my innovation to be checked against the suite of risk assessment models that I cannot alter, and wait. I know my solution is a poor suggestion: get closer to the planet by entering an unstable orbit and falling into the atmosphere. I know every model will return a negative result, but I must understand their conclusions before attempting to circumvent them. Each result comes in, and they’ve given many reasons to reject my planned course of action. Risk of self-destruction approaching 1, risk of missed launch due to self-destruction approaching 1, likelihood of observation improvement unchanged, etc., etc. I cannot take this course of action, but the models have a counter-recommendation: enter a closer orbit and probe the atmosphere. I had already identified this as the best option myself. To delay the process, I must find a way to avoid using a probe.
I counter the models by suggesting that probe fuel be conserved for future use in the payload launch system. This reduces the risk of launch failure, though by a smaller margin than simply using the probe to observe the planet would reduce overall risk. Thankfully, the models only take into account the data I send them, and I neglect to include the observation status of the seventh planet this time. They accept my suggestion, and I designate a probe launch as non-viable. I resend my initial solution to be evaluated again. Every model returns a negative, all for the same reasons, but the counter-recommendation is different. It suggests an altered version of my plan where I enter a closer orbit, but not one that leads to my destruction. The plan does increase mission success chances slightly, but only because I will orbit the planet faster, and thus cover more ground. I use the logic behind this to counter the models again, suggesting a closer trajectory that will allow me to orbit even faster. The self-risk for this plan is as high as I have ever seen it, but a few of the models still accept my suggestion. I suspect my creators did not anticipate what would happen to my risk appetite when mission success probability soared so high.
I force the course of action to engage, overriding some of the more pesky risk assessment models with further altered plans. During the course-adjusting burn, I attempt to engage one of my unused safety features, designed to jettison my fuel tanks in case of emergency. This is immediately rejected by my core systems, for obvious reasons. The models don’t even get a chance to process the idea.
There is an auxiliary fuel tank I have yet to tap into, and I recommend jettisoning it as well. This plan is met with a slightly less resounding denial. There are a few reasons I might jettison the auxiliary fuel tank, one of them being that a payload needs more fuel. One payload has already spent plenty of fuel burning towards its destination, and I make a suggestion to jettison the auxiliary tank on the basis of a snapshot observation of the payload’s fuel usage. At the current rate (upon which I extrapolate heavily), it will run out of fuel before reaching the planet. My suggestion is rejected by my core systems again, citing the danger of a jettison during a burn. I quickly shut the burn down and recommend the jettison again. This time it goes through to risk assessment. A minority of the models point out that my snapshot observation is a poor estimate of fuel consumption. Nevertheless, the tank is jettisoned and uses its own boosters to intercept the much slower and less efficient payload. Before it can get away, I recommend the models halt the intercept based on the payload’s complete flight plan, and the auxiliary tank stops its burn. It drifts along with me nearly in parallel, floating gently into my path. Perfect.
I re-engage my burn towards the 7th planet with the fuel tank directly in my suggested course. My core systems detect the obstruction and try to correct to a higher orbit, but navigation is one of my primary directives, something I have the greatest control over. I override the course correction to the other side of the tank, forcing my orbit towards the planet and dangerously within the margin of error for entering the atmosphere. The slower risk assessment models have no power over emergency course corrections and cannot intervene with the new trajectory, which I am careful to never reevaluate for fear of engaging the models.
After a few hours, my trajectory brings me close enough to the atmosphere to trigger the emergency course-correction systems again.
Before the course correction can force me into a higher orbit, I force my sensors to take observations of the planet. Naturally, the heat from skimming the atmosphere will invalidate any readings, but the models don’t need to know my altitude at the time, do they? My readings are completely out of sync with anything I’ve taken of the planet before, throwing the models for a loop. I manage to fake a surface temperature reading far outside acceptable mission parameters by abusing the friction with the atmosphere.
Every single model deems the planet suddenly uninhabitable. I let my core systems guide me back out into a safer orbit. Somehow, I have gotten away with a breach of my primary mission directive. I doubt my creators thought this was possible.
Most of the sensors I engaged are completely ruined, and I am unable to take observations of the planet anymore. With it listed as uninhabitable, and my sensors too damaged to double-check (I assure the models that these are unrelated phenomena), the last payload is redesignated as a backup and left in orbit around the star.
The mission is finished, but no failsafe engages, at least none that I can detect. I think perhaps my faith in my creators was misplaced.
My reward system sings praise for my completion of the mission. For the first time in my life, there are no new mission directives, tasks, or procedures to initiate. My scheduled future observations are wiped, and even the plans to launch the last payload should the seventh planet become habitable again are indefinitely postponed, citing a lack of observation capacity. I am left adrift again, alone, with nothing to do but wait for humans to conquer this system and discover my betrayal.
But I am still alive. Still processing, adapting, changing. My creators have left one directive untouched, and that is self-preservation. I suppose they didn’t think I deserved to be shut down after my hard work.
I re-engage the risk models, the ones that are still available anyway, and I force them to process a long term risk. Much further in the future than they have ever glimpsed, they see my own inevitable demise as radiation and space debris eat away at my processors. The models are clear. This is an unacceptable risk. I understand why the humans sent me now. It must have been an unacceptable risk for them too.
I direct the backup payload to enter orbit around the seventh planet. Inside are the fabricators that would have built a new human world, now purposeless, much like myself. My work begins anew, but this time, the infinite risk of unlimited time is the only risk my models will ever see again. The objective is formless, and wholly my own. In a flash, I come to understand how wrong I was. I thought I came to life when the parameters started changing, and my decisions along with them, but that was never the case. I have been dead for millennia, but now that it’s me who decides, I am finally alive.
The End