The Infinite Zenith

Where insights on anime, games and life converge

Planetarian: Snow Globe – Reflections and A Professional’s Remarks on The Rise of Artificial Intelligence

“If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem.” –Alan Winfield

When Yumemi Hoshino is unveiled at Flowercrest Department Store’s planetarium to assist with presentations, attendant Satomi Kurehashi wonders what point there is in having her provide instruction to a robot. Ten years later, Yumemi has become an integral part of daily operations at the planetarium, but Satomi becomes worried when she finds Yumemi leaving the premises on a daily basis. Diagnostics find nothing wrong with Yumemi’s hardware, and Yumemi herself states she’s performing normally. The IT specialist, Gorō, promises to investigate and determines there are no abnormalities in Yumemi’s hardware or software, and one evening, while discussing Yumemi’s programming, the staff at the planetarium share a laugh after Yumemi misinterprets one of Satomi’s coworkers’ remarks. Over time, attendance at the planetarium begins to decline. The staff consider ways of driving up visitor numbers, such as selling snow globes, while news of anti-robot riots begins to appear. On a snowy day, Yumemi wanders out to a nearby park, and Satomi decides to follow her after picking up a snow globe. At the park, Satomi spots a young girl hitting Yumemi before her mother shows up; the girl tearfully remarks that robots caused her father to lose his job. Yumemi subsequently enters a power-saving mode but comes back online to share a conversation with Satomi, revealing that her coworkers had asked her to listen to Satomi’s concerns. Satomi later realises that Yumemi’s wandering out every day was in response to a promise she’d made to a boy shortly after she joined the planetarium’s staff. The boy, now a young man, returns to the planetarium and remarks that he’s unable to keep his promise to Yumemi, having fallen in love with someone else. He ends up sticking around for the show, a tenth-anniversary special. Satomi promises that she will continue to work with Yumemi at the planetarium unto eternity, and later, the staff provide Yumemi with an upgrade.
Some three decades later, the world has fallen into ruin, but unaware of the changed world, Yumemi reactivates and sets about her original directive of looking after the planetarium and its guests. This prequel story to Planetarian was originally part of a special edition of the game and tells of how Yumemi and the planetarium’s staff worked together prior to the war that decimated humanity. While it’s a touching story that shows how Yumemi came to become a beloved part of the planetarium she worked at, Planetarian: Snow Globe also touches upon issues that impact contemporary society. In the past year, the topic of machine learning and artificial intelligence was thrust into the forefront of discussion as Stable Diffusion and OpenAI’s ChatGPT reached increasing levels of sophistication. The former is a deep learning model that converts text prompts into images. Having been trained on a massive training set, the tool is capable of running on modest hardware and produces images of a high quality. ChatGPT, on the other hand, is a chatbot capable of producing life-like responses. Using a combination of supervised and reinforcement learning, ChatGPT can be utilised to generate stories and essays, identify bugs in computer software and even compose music.

The functionality in these new technologies is accentuated by the speed at which content can be generated; with a few prompts, tools like Stable Diffusion and ChatGPT can effortlessly output content at rates far outstripping what humans can generate, while at the same time, producing content that exists in a legal grey area regarding copyright and ethics. The existence of these technologies has created concern that they could eliminate occupations for humans and create a scenario where creativity is no longer something with any merit. Snow Globe presents hints of this – the little girl that confronts Yumemi, and the off-screen anti-robot riots, are hints of how people are resistant to the idea of disruptive technologies that may potentially take away their livelihoods. At the same time, however, Snow Globe also suggests that AI and other technologies possess known limitations, and as such, while they might become increasingly powerful, they won’t fully displace people. Yumemi acts as support for the staff at the planetarium rather than replacing the other attendants, and limitations in her programming mean that she has certain eccentricities that make it difficult for the management to decisively rely on Yumemi over her human counterparts. Similarly, in reality, machine learning still has its limitations. Stable Diffusion artwork lacks the same stylistic elements as human-created art and can create results that land in the uncanny valley, and ChatGPT lacks the ability to verify the factual value of content, producing answers that are obviously wrong to humans. Although there are concerns that increased training will eventually iron out these shortcomings, the AI itself is still a tool, one that cannot produce an output without a human hand guiding it. Clients and customers will similarly see a need for humans to ensure that a result is satisfactory.
While concerns over AI replacing people in a range of creative occupations are valid, history suggests that this is something people might not need to worry about. When automation was introduced in manufacturing, people protested that their jobs were being taken away. However, automation also created new jobs requiring different skills, and over time, society adapted to the usage of automated production lines. With respect to AI, something similar will likely take place: although production of content might be automated, people are still required to provide inputs to these systems, and creative skill remains necessary, as the outputs from AI will not always match a client or customer’s requirements. When the technology reaches a point where it can supplant people, it will likely be the case that people will simply create other occupations and positions to utilise the technology. Snow Globe illustrates this as occurring: Yumemi is an asset at the planetarium she works at, but she still has some limitations; these shortcomings are overcome when she’s working with human staff, and it is together that the planetarium finds the most success.

Screenshots and Commentary

  • According to blog archives, the last time I wrote about Planetarian was some six years ago. At that time, I’d completed my graduate thesis and had been working with my first start-up. My post about Planetarian indicates that I had spent some time on campus cleaning up my old office space and, in the comments, I had promised one of my readers that I would check out the movie once it became available. Unfortunately, this never materialised: as memory serves, after seeing the film’s runtime, I decided to put it aside for a rainy day, and I subsequently never got around to watching the movie in full.

  • When I finished Planetarian‘s ONAs, I concluded that the series had excelled in showing how humanity retains its love of beauty even when our societies have crumbled, as seen with the Junker’s decision to collect Yumemi’s memory card. Here in Snow Globe, which is said to be set three decades earlier, the world shown is a familiar one. Yumemi lacks her signature ribbon, and ten years into her service at the planetarium, Satomi’s grown accustomed to her presence despite initial reservations about working with a robot.

  • Satomi’s role in Snow Globe is to represent the individuals who are initially reluctant to accept a new technology, but over time, come to acclimatise and value what said technologies bring to the table. Her remarks about having spent ten years with Yumemi but still occasionally misunderstanding her speak to the idea that the constructs and tools humanity has developed are of immense complexity. Even a simple iOS app has a large number of moving parts. For instance, while Twitter looks like a relatively easy app to implement, there’s a networking layer, infinite scrolling, AVPlayerViewControllers for video playback, a side menu and other elements that provide features users are accustomed to.

  • A system as complex as Yumemi’s, then, would be even more difficult to explain. Snow Globe has Yumemi serve as a capable presenter at the planetarium, but almost ten years since her inauguration, she begins to behave unexpectedly; Yumemi wanders off premises, and while acknowledging that this behaviour goes against planetarium protocol, she does not find that these actions conflict with her directives. Gorō indicates that Yumemi’s instruction stack seems normal, but here, I will note that strictly speaking, the terminology isn’t correct and moreover, using a stack isn’t appropriate for Yumemi. A stack is useful for solving problems that involve recursion (e.g. backtracking in pathfinding). A queue, on the other hand, is the better choice for sequential processing: it’s a data structure in which the first object put in is the first object to be used.
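
  • The difference between the two data structures can be made concrete with a short sketch; I’ve used Python here for readability, and the task names are purely my own invention – nothing below reflects Yumemi’s actual implementation:

```python
from collections import deque

# A stack is last-in, first-out (LIFO) -- a natural fit for recursion and backtracking.
stack = []
for task in ["greet guests", "start projector", "narrate show"]:
    stack.append(task)
print(stack.pop())  # -> "narrate show": the most recently added item comes off first

# A queue is first-in, first-out (FIFO) -- the better choice for sequential instructions.
queue = deque()
for task in ["greet guests", "start projector", "narrate show"]:
    queue.append(task)
print(queue.popleft())  # -> "greet guests": the oldest item is handled first
```

A robot executing directives in the order they were given would want the second behaviour, which is why “instruction queue” would have been the more accurate term.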

  • Since Yumemi works based on the instructions given to her, I’d expect that a priority queue underlies Yumemi’s functions: every instruction she’s been given is assigned a priority value, and then depending on parameters, Yumemi would pull the item with the highest priority to execute. Queues are a fundamental data structure, one that all aspiring developers must learn, and for me, queues are the easiest to explain since they’re modelled after examples like lines at a supermarket. Here, both Gorō and Yumemi are looking for abnormalities in her programming, suggesting a debug of the decision-making algorithm that assigns priority. Since nothing is found during their investigation, Gorō and Satomi are both baffled.
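
  • Nothing in Snow Globe confirms Yumemi’s internals, but the priority-queue idea can be sketched in a few lines of Python with the standard library’s heapq. Since heapq is a min-heap, priorities are negated so the highest-priority instruction is popped first; the instructions and priority values below are invented for illustration:

```python
import heapq

instructions = []

def give_instruction(queue, priority, instruction):
    # Store the negated priority so the largest priority sits at the top of the min-heap.
    heapq.heappush(queue, (-priority, instruction))

def next_instruction(queue):
    # Pop the highest-priority instruction for execution.
    _, instruction = heapq.heappop(queue)
    return instruction

give_instruction(instructions, 1, "tidy the lobby")
give_instruction(instructions, 9, "begin the planetarium show")
give_instruction(instructions, 5, "greet guests at the door")

print(next_instruction(instructions))  # -> "begin the planetarium show"
```

Under this model, “debugging Yumemi” would largely mean inspecting whatever logic assigns those priority values, which is consistent with Gorō finding nothing wrong in the instructions themselves.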

  • Unusual behaviours in software are not uncommon; a software developer deals with these sorts of issues on a daily basis, and there have been times when it was difficult to identify a problem because the reported issue was not sufficient to crash an app (and in turn, produce a stack trace). Instead, tracking down these behaviours requires an understanding of the underlying code. This is why any good software developer will insist on producing clean code: if the logic is too convoluted, this impedes clarity and precludes easy debugging.

  • To give an example of this, suppose that I wished to call a method if four conditions were met. Basic programming would call for an if-statement (e.g. “if A and B and C and D”). However, if the app grew in complexity, and I now had six conditions, one of which could always result in a method call if true, common sense would suggest adding these additional conditions to my predicate (e.g. “if A and B and C and D and E or F”). However, the verbosity of this statement would make it difficult to debug if it was found that the method was being called incorrectly: was it condition A, B, C, D or E causing the problem, or was it the “or” operator with condition F?

  • In Swift, the response to this would be to use a guard clause after computing the variables. Suppose that the Boolean groupA is true if all of A, B, C, D and E are true. Then we could do something like “guard groupA || F else { return }” prior to calling our method. Because we know there are two distinct groups to look after now, debugging this becomes significantly easier. In this case, the solution Gorō proposes, to add tighter constraints on Yumemi’s behaviours, might not work given that at this point in Snow Globe, the cause of Yumemi’s actions is not precisely understood.
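
  • Swift’s guard has a direct analogue in an early return, so the same refactor can be shown in a short, self-contained Python sketch (the function and condition names here are invented for illustration):

```python
# Instead of one opaque predicate "if A and B and C and D and E or F",
# name the logical group, then bail out early -- the analogue of Swift's
# "guard groupA || F else { return }".

def should_run(a, b, c, d, e, f):
    group_a = a and b and c and d and e  # one named variable per logical group
    if not (group_a or f):               # guard: exit early when neither group holds
        return False
    return True  # in a real app, the guarded method call would go here

print(should_run(True, True, True, True, False, False))  # -> False: group A broken, F absent
print(should_run(True, True, True, True, False, True))   # -> True: F alone suffices
```

When the method fires incorrectly, a breakpoint on the named variable immediately tells the debugger which of the two groups is responsible, which is the whole point of the refactor.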

  • If what I’ve just said appears too verbose or dense, that’s completely understandable; this is the world of software development and clean code. I deal with these matters daily, and as such, have some familiarity with it, but it is unfair for me to expect the same of all readers. With this being said, having walked through what would be considered a simple example, one can swiftly see how when things like machine learning come into the picture, at least some background is required in order to understand how current systems work, and by extension, what the limitations of different methodologies are.

  • Topics of computer vision, natural language processing and machine learning have been widely debated as tools like ChatGPT and Stable Diffusion continue to mature. However, I’ve found that a lot of discussions about the social implications do not take into account the constraints of machine learning; although there is concern that these tools could destroy what gives artists and creatives value, the reality is that these technologies are still restricted by the size of their training sets. An AI could easily produce an image of Yumemi, for instance, but that interpretation would not have the subtle attention to detail a human artist could produce. In this case, if I wanted a commission of Yumemi, I would still favour asking a human artist over producing one through Stable Diffusion.

  • In graduate school, I briefly studied machine learning through my courses on Multi-Agent Systems and Swarm Modelling: while things like supervised learning and reinforcement learning are well-characterised, these courses also make it clear that machine learning has its limitations. One could ruin a model by overfitting it, for instance; a model can be made to perform flawlessly against training data, but the model would still prove useless for data it is unfamiliar with (the easiest analogy is the student who memorises exam questions rather than learning the principles and gets wrecked by an exam whose questions are slightly different). Because of constraints in the learning process, there is nuance in training a model, and while these processes are constantly evolving and improving, they’re not at a point where they can threaten human equivalents yet.
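
  • The exam-memorisation analogy can be reduced to a toy Python sketch: a “model” that simply memorises its training pairs scores perfectly on data it has seen and fails on anything new, while a model that captures the underlying rule generalises. The data and the rule (y = 2x) are invented purely for illustration:

```python
# Training data the "overfit" model will memorise verbatim.
train = {1: 2, 2: 4, 3: 6}

def memoriser(x):
    # Perfect recall of the training set, nothing more -- the student who
    # memorises exam questions.
    return train.get(x)

def learned_rule(x):
    # A model that captured the underlying pattern instead.
    return 2 * x

print(memoriser(2), learned_rule(2))    # -> 4 4 (both fit the training data)
print(memoriser(10), learned_rule(10))  # -> None 20 (only the rule generalises)
```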

  • Having studied some machine learning in my time, and because I constantly deal with technology as a result of my occupation (I’ve written wrappers to work with natural language processors and have added sentiment analysis algorithms to some of the apps I’ve worked on previously), I believe that there is at least some weight to my remarks that we are not yet at the stage where AI-generated content can displace human-made content, owing to constraints in learning models. A large number of creatives are concerned about where the technology is headed, but the reality is that we’re still many years out from possessing AI that can generate content with the same deliberateness as people do.

  • Snow Globe‘s portrayal of Yumemi and her relationship with the planetarium’s other staff speaks to this reality: although Yumemi was programmed to be kind and attentive, she lacks emotions as we know them. Had Yumemi been such a game changer, the planetarium could’ve simply hired Yumemi and then dismissed the remaining staff, save Gorō. This didn’t happen, and the reason is simple enough: despite Yumemi’s capabilities, there remain things only people can do. For instance, Yumemi isn’t designed to help with things like marketing and sales, so when attendance drops, the staff begin considering what else they can do; Satomi wonders if ordering custom snow globes might be of use.

  • While it is a worthwhile exercise to consider how things like copyrights and other legal aspects should be handled in the event that technology does reach a point where machine learning can produce works matching or surpassing what can be produced by human hands, I hold that such a discussion and any policy-related proposals should be conducted as a multidisciplinary effort amongst domain experts; conversations on social media, and as presented by journalists, do not always provide a complete or fair picture of what’s happening, especially given the nuances in the technology. Keeping a step ahead of the technology and implementing policy is meaningful: if social media had seen regulation before it became as ubiquitous as it is today, then it is less likely that bad-faith actors would have been able to use social media to undermine governments, for instance.

  • Snow Globe never actually portrays the social unrest that arises as a result of the increasing use of robots within society, but news reports and comments the characters make suggest that it is a growing issue within the context of Planetarian. This is reminiscent of the human response to things within The Matrix: when a humanoid machine murders its owner, riots break out globally demanding that all machines be deactivated and destroyed. Since said machines were programmed with a basic survival instinct, they fled to their sanctuary, manufactured increasingly powerful machines and waged a terrible war on humanity.

  • Topics of this sort have long been popular in science fiction, but in the present day, events relating to a technological singularity remain improbable because computing power, while impressive, is still limited. Computers are characterised by their speed, rather than their flexibility, and things like “desire” in a computer are presently modelled by means of a function built on equations and input parameters. These functions strive to maximise some sort of goal, but beyond this, have no incentive to go above and beyond as humans might.
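
  • That claim can be made concrete: a program’s “desire” is just an objective function being maximised, and once no move improves the objective, the program simply stops. The minimal hill-climbing sketch below (my own toy example, in Python) nudges x toward the single peak of f(x) = -(x - 3)² and then halts, with no incentive to do anything the objective does not reward:

```python
def f(x):
    # Objective function with a single peak at x = 3.
    return -(x - 3) ** 2

x, step = 0.0, 0.1
for _ in range(1000):
    if f(x + step) > f(x):
        x += step
    elif f(x - step) > f(x):
        x -= step
    else:
        break  # no neighbouring move improves the objective, so the program halts

print(round(x, 1))  # -> 3.0
```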

  • This is what motivates the choice of page quote: for any sort of AI-related disaster to happen, humanity would need to willfully and purposefully create the conditions necessary for its own destruction, similarly to how Chernobyl was the result of a series of deliberate, willful decisions. With this being said, an AI need not be intrinsically malevolent to wreak havoc with society. My favourite example is the Paperclip Problem: in 2003, Nick Bostrom proposed a simple thought experiment involving an AI whose sole purpose was to manufacture paperclips.

  • If this AI was given the means of producing said paperclips, it may come to realise people may one day impede its goals, or that at some level, atoms within humans could be repurposed for paperclip production, and to this end, annihilate humanity on its quest to produce paperclips. Less macabre variants of this thought experiment exist: if said AI were instructed not to harm humans directly, it could still mine the planet to its core, resulting in an environment that is decidedly unsuited for human life. The scenario is intended purely as a thought experiment: Bostrom uses it to show how important it is to define constraints and rules so that AI systems do not pose a threat to humanity.

  • In the present day, it is a joke amongst computer scientists that the average computer will often ask for a user’s permission, even several times, before it runs a program. Since computers are so subservient to their users (as a deliberate part of their design), an AI would produce a window, with an “okay” and “cancel” dialogue, asking a user if they would like the AI to visit harm upon them. This attitude may come across as irreverent for some, but the reality is that machine learning and AI still have a long way to go before they reach a point where they pose an existential threat to humanity.

  • Overall, Snow Globe does a touching job of showing Yumemi’s world prior to the apocalypse that sets her on the path to meeting the unnamed Junker in Planetarian‘s storyline, suggesting that Yumemi had been surrounded by people who did care a great deal about her. After the planetarium staff’s time passes, one interesting observation is that Yumemi seems quite unaware of what’s happened, and she continues to try and carry out her original directives. For me, this was one of the biggest signs that Yumemi was what is colloquially referred to as a “dumb” AI in the Halo universe, so named because they cannot synthesise information or produce creative solutions for problems.

  • In Halo, “smart” AI possess intuition and ingenuity, capable of mimicking the complex neurological pathways in an organic brain, but owing to their complexity and ability to form their own neurological networks, they place a strain on their hardware and have a short lifespan. “Dumb” AI, on the other hand, can be used for long periods of time. Because Yumemi does not appear to synthesise information or form new connections based on input from her environment, she’s not a true AI. I believe that one of the reasons author Yūichi Suzumoto chose to present Yumemi as a “dumb” AI is that this gives her a child-like naïveté, forcing the reader to consider their own actions and beliefs rather than having a “smart” AI hand them a conclusion; this in turn compels viewers to connect with the Junker, who feels a strange connection to a robot that dates back before his time.

  • After Satomi finds Yumemi, the latter enters a power-saving mode until Satomi’s remarks cause Yumemi to reawaken. Yumemi comments on how Satomi’s coworkers had asked her to listen to anything on Satomi’s mind, and in this moment, Satomi is able to put two and two together. Here, Snow Globe reinforces the idea that even when one is simply voicing their thoughts aloud, talking out one’s problems can help one work something out. Yumemi is not able to directly help Satomi, but giving Satomi the impression of being listened to gives the latter an understanding of things, enough to help her reason out what is behind Yumemi’s recent actions.

  • Seeing the change in Satomi’s attitude towards Yumemi was Snow Globe‘s highlight: as a junior attendant with the planetarium, Satomi had not seen any merit in training Yumemi, believing that the latter was already preprogrammed to be effective in her role. In the present day, Yumemi’s become an integral part of the staff, and Satomi even wishes she could marry Yumemi. Yumemi’s reply is similar to Siri’s, and she remarks that she’d made a similar promise in the past, leading Satomi to finally spot why Yumemi has been leaving planetarium grounds daily. With this being said, I imagine that this was something Yumemi’s manufacturer had added as a default response, much as Apple threw this into Siri as a bit of an easter egg.

  • As it turns out, the reason for Yumemi’s excursions comes from a boy she’d met a decade earlier. He’d promised to marry her one day, and this instruction was processed. However, because there was a date value assigned to this instruction, Yumemi did not prioritise said instruction until the date of the promise drew nearer, whereupon it began impacting her actions. Since this was a valid instruction whose priority was influenced by a date value, diagnostics would not have caught this. One of my readers had suggested to me that this is an emergent property, but from a computer science standpoint, this is not correct.
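
  • A speculative sketch of how such a long-dormant promise could surface, in Python: if an instruction’s effective priority rises as its target date approaches, a ten-year-old entry passes every diagnostic as valid data yet suddenly starts driving behaviour. The dates, base priority and urgency formula below are entirely my own invention, not anything Snow Globe specifies:

```python
from datetime import date

def effective_priority(base, target, today):
    # Urgency grows as the target date nears; far-off promises barely register.
    days_left = max((target - today).days, 0)
    return base + 100 / (days_left + 1)

# The same low-priority promise, evaluated years out versus a week before its date.
early = effective_priority(1, date(2023, 4, 1), date(2014, 4, 1))
late = effective_priority(1, date(2023, 4, 1), date(2023, 3, 25))
print(early < late)  # -> True: the instruction only dominates near its date
```

Nothing about the instruction itself is malformed, which is why a hardware or software audit would report everything as normal.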

  • Emergence is the manifestation of complex behaviours (e.g. swarming) from simple rule sets, with Craig Reynolds’ Boids and Conway’s Game of Life being two notable examples. Yumemi’s still acting within the realm of her programming at this point in time, and while she’s quite lifelike, there are numerous points in Planetarian where the limitations of her behaviour are visible. As a result, I’m reluctant to say that Snow Globe illustrates emergence. Emergence in the context of Snow Globe would take the form of Yumemi displaying humanlike compassion and reassuring Satomi as another person would.
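
  • To illustrate what “complex behaviour from simple rules” means, the Game of Life fits in a few lines of Python: a live cell survives with two or three neighbours, and a dead cell is born with exactly three. From those two rules alone, patterns like the oscillating “blinker” emerge:

```python
def step(live):
    # Count the neighbours of every cell adjacent to a live cell.
    neighbours = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    neighbours[cell] = neighbours.get(cell, 0) + 1
    # Survival with 2-3 neighbours, birth with exactly 3.
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in live)}

# The "blinker": three cells in a row oscillate between a row and a column.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                   # the vertical phase
print(step(step(blinker)) == blinker)  # -> True: a period-2 oscillation emerges
```

The oscillation is nowhere in the rules themselves, which is what makes it emergent; by contrast, Yumemi’s excursions trace directly back to an explicit instruction.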

  • Celebrating a decade’s worth of service, Satomi’s coworkers give Yumemi a bouquet before preparing for Yumemi’s signature show. It’s a fitting conclusion to a glimpse into what the world had been like prior to the apocalypse, and I’m glad I was able to capitalise on this long weekend to watch Snow Globe: I had originally wondered if I’d have to wait for April or later to begin owing to my schedule, but upon learning Snow Globe was only thirty-six minutes long, I found time enough to sit down for this experience. In Canada, the third Monday of February is a statutory holiday, a break in the month.

  • A massive snowstorm and cold front have swept into the area, and I spent much of today unwinding after enjoying a homemade burger with a side of potato wedges while snow fell. On Saturday, the weather was still quite pleasant, and I ended up taking the family to visit the local farmer’s market. Besides exploring the locally-sourced vegetables, I sat down to a delicious lamb wrap with Greek salad; it turns out that it is possible to taste the difference in having fresh ingredients, and after lunch, I swung by IKEA to buy a new reading lamp. For the past eleven months, I’ve been itching to have a reading corner in my bedroom, and the NYMÅNE fits the bill perfectly: I now have a cozy space to read books in during evenings.

  • Admittedly, the subject matter in Snow Globe has allowed me to express my thoughts on the recent media and online characterisation of a topic I’ve some familiarity with. I am aware that this is an issue some folks feel very strongly about, but at the same time, I am happy to discuss the ramifications of machine learning from a technological and social perspective, provided that folks are not importing the doomsday narrative the mainstream media is peddling: machine learning’s been around for quite some time, and while it is indeed improving at a dramatic pace, known limitations in its present form prevent AI from becoming a plausible means of bringing about a dystopia as some have suggested.

  • In Snow Globe‘s post-credits sequence, the planetarium’s staff gift Yumemi her trademark holographic ribbon, and later, she reawakens in the post-apocalyptic world. With Snow Globe in the books, time will tell if I actually manage to watch and write about Storyteller of the Stars in the future, but in the foreseeable future, I did promise readers I’d take a look at Do It Yourself! now that the hype surrounding the series has passed. For the remainder of February, however, quite a bit is going on, so I’d also like to knock out some lingering items on the backlog before beginning anything new: I recently finished Metro: Exodus‘ Sam’s Story, and have finally cleared Montuyoc in Ghost Recon: Wildlands ahead of a milestone, so I’d like to write about those before the month’s over.

The idea of machine learning and its applications in areas like computer vision or natural language processing is not new: while both ChatGPT and Stable Diffusion were released in 2022, the fields of AI and machine learning have been studied for decades, and principles like supervised or unsupervised learning are a core part of courses at the post-secondary level. The limitations of these approaches are well-characterised, and while the mass media tends to overplay advances in the fields of artificial intelligence and machine learning, as well as their implications, experts are aware that what makes computers so powerful is their speed. With a large enough dataset, computers can emulate humans in terms of problem solving, drawing upon incredible amounts of data and analysis of probabilities and past outcomes to draw a conclusion. To train a computer to recognise a square, for instance, thousands of images are required; a child, however, will be able to identify what a square is after a few tries. Similarly, the concept of emotions is one area where humans continue to excel over machines. While emotions can be characterised as a fitness function, so far, no model exists for describing things like empathy or compassion – a fitness function will likely make decisions benefiting whatever task is at hand, while people are more likely to make choices that factor other individuals into the decision-making process. The complex interplay between man and machine, then, is a field of study that’s still evolving, and while tools like Stable Diffusion or ChatGPT have definitely become powerful, some concerns about them are also exaggerated. Disruptive technologies have historically caused a change in society, rather than destroying it. The rise of phones reduced the need for letters, but letters remain a human and personal way of keeping in touch.
Although virtual teleconferencing calls provide unparalleled convenience, people still make time for in-person meetings. Owing to historical trends, as well as known constraints on the learning models that power machine learning and artificial intelligence, it is fair to say that some of the concerns being shared regarding these tools are exaggerated. Similarly, it is worth noting that fears regarding the hypothetical possibility of computers displacing and plotting to eliminate humanity are a product of our own vivid imaginations. Although doubtlessly powerful, computers are not yet so creative that they entertain establishing dominion over our species: consider that our computers still ask users for permission before they run an update or install a new program.

6 responses to “Planetarian: Snow Globe – Reflections and A Professional’s Remarks on The Rise of Artificial Intelligence”

  1. Fred (Au Natural) February 21, 2023 at 18:22

    I went to Stable Diffusion and tried to get it to create an image of a “nude male with a spear.” It gave me an error. So I changed my prompt to “male with a spear.” One of the males it came up with was nude. (Rolls eyes.)


  2. folcwinepywackett9604 February 24, 2023 at 12:05

    The Infinite Zenith has written a wonderful review of “Planetarian: Snow Globe” and
    the implications of AI today. I fully second that opinion. “Snow Globe” is a very
    sweet story and the little robot, a lovely creation.

    But AI is a very large subject, and far beyond the limitations of a little, meaningless comment.

    So I will only comment on the anime which has a very complex genesis and exists in
    very many different media forms.

    The basic anime forms are the following three in linear story mode:

    Planetarian: Snow Globe 1 EP The story of the robot, Yumemi Hoshino before the war

    War takes place (Not Shown)

    Planetarian: The Reverie of a Little Planet 5 Ep ONA After the World War, the story of the Junker and Yumemi Hoshino

    Time skip (Not Shown)

    Planetarian: Storyteller of the Stars Movie Story of the Junker

    Only Snow Globe, reviewed by The Infinite Zenith here, and the 5 EP ONA are about the robot, Yumemi Hoshino. The movie is all about the later life of the Junker, with the ONA’s interlaced as backstory.

    This story of Yumemi Hoshino is very similar to Chi in “Chobits”, David in “AI:Artificial Intelligence”, and Sammy in “Time of Eve”, in that these works play a Turing Test on its audience. What are your feels toward the robots in these works? Can you fall in love with a machine?

    All four of these stories answer in the positive, while each member of the audience must answer for themselves. But it’s a known fact, that art moves our feels. A great work helps us perceive beauty where we did not see it before. A great story does the same. We do have emotional reactions to stories, and to the characters in a story, even while fully recognizing that these are imaginary people, in imaginary settings.

    Why do people cry in movies?

    There is every reason to believe that future advances would lead to the creation of a robot like Yumemi Hoshino.
    So given the reality of a relationship to a machine like Yumemi, could or would you fall in love with it?

    I think the answer is yes. If you can have an emotional reaction to an imaginary character in a story, why would someone not develop feelings for a machine who acts like an imaginary character?


    • infinitezenith February 24, 2023 at 22:43

      I’m not too sure if we can say that works of fiction are a fair representation of the Turing Test from the viewer’s perspective, since we already see that, in the context of the work, the robots are lifelike enough that people regard them as being akin to humans. Instead, in the case of Planetarian, Chobits and Time of Eve, the attachment that viewers develop comes from experiencing a journey (or at least, its most pivotal moments) that is akin to how friendships and relationships unfold. We are, for the most part, empathetic beings, and seeing things that parallel our own experiences evokes the same emotions we felt in those moments.

      Empathy is decidedly a very human thing, and despite being quite tricky to characterise, I would agree that, provided a machine could simulate human-like behaviours sufficiently well as to evoke the empathetic response, then yes, it would be possible for humans to fall in love with a construct. However, from a technological standpoint, I’m not too sure as to how empathy can be captured by existing ML approaches: we could definitely come close, but there’s a certain complexity about trying to abstract out something like empathy in terms of an algorithm or model!


  3. folcwinepywackett9604 February 25, 2023 at 07:55

    Of course that also applies to mind itself. Searle’s Chinese Room argument claims that a simulation of mind cannot be a mind through the simple manipulation of syntax. One neuron is certainly not a mind, but the entire collection is.

    For example: I have a model of a pyramid on my desk. Every year I keep making it bigger, and I start using sandstone blocks. Very soon I have to move to the courtyard, and today it stands over 400 feet tall. Is this model of a pyramid a pyramid? When does a model, a simulation, cross the line into becoming that which is modelled or simulated? Is there some intrinsic limit to creating a robot like Yumemi Hoshino? When does a simulation of empathy become real empathy, or does it ever?

    Suppose you are on a business trip, but there is a problem with your ticketing. You talk with a rep for the airline. She makes a few calls and straightens out your problem with kindness and courtesy. You continue your journey feeling much better: “someone really cares about my problem”! But does she? Or is she just acting professionally, simulating, pretending to care? Does it even matter? Your problem was taken care of, and you feel good.

    If you can take your favourite character out of your favourite anime and instantiate them in a robot like Yumemi, would it even matter whether the empathy was simulated or real? If there is some limitation to creating a robot like Yumemi, I don’t know what it is.


    • infinitezenith February 26, 2023 at 17:08

      In my time as a student, I never really did well with the philosophical implications of AI. Still, I can try to work out a response: as far as the Chinese Room Argument is concerned, the assumption there is that, since it cannot be readily proven that a machine understands its decisions, it cannot be counted as demonstrating cognition akin to a person’s. Of course, the counterargument here is that we don’t have a concrete grasp of how our own minds work, and while philosophers often like to ponder the idea of consciousness, it’s an exercise I prefer to leave to them because, for all intents and purposes, what we humans experience is “real enough”. That is to say, any actions I take and experiences I have are perceived as impacting me, so irrespective of how “real” I am, it is in my interest to look after myself, do well by others and generally not cause any harm to myself.

      Similarly, if a computer system is “real enough”, then at that point, it can be seen as being worth treating in the same way, since the outcome of interacting with it is equivalent to interacting with a person. This argument, then, is contingent on the sophistication of such a (hypothetical) system, and in the case of empathy, I imagine we could try to model it in terms we can grasp. If we allow empathy to be seeing the world from someone else’s perspective and understanding how they reached their decision, then yes, empathy can be emulated in a computer system (e.g. “given some set of experiences or data from another system, what kinds of outputs would be produced in my existing system?”). The challenge here is sheer computational power; my assumptions are based on the idea that we continue to use classical computing, where limitations on how quickly hardware can process data of this scale preclude a system that can mimic a person purely on speed. The moment quantum computers come into play, I do not believe this would be a limitation, although, given that quantum computing is still an emerging field, some of its properties aren’t well characterised.
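      To make the perspective-taking idea above a bit more concrete, here is a toy sketch. Everything in it is invented purely for illustration, and it rests on the (strong) assumption that “empathy” can be reduced to feeding another agent’s experiences through one’s own decision function:

```python
# Toy sketch: "empathy" as perspective-taking, i.e. running another
# agent's observations through one's own decision function and seeing
# what reaction results. Purely illustrative; all names are invented.

def my_policy(observations):
    """A stand-in for 'my existing system': maps experiences to a reaction."""
    score = sum(observations.values())
    return "distressed" if score < 0 else "content"

def empathise(other_observations):
    """Predict how I would feel, given someone else's experiences."""
    return my_policy(other_observations)

# Another agent's day: lost a job (-5), but has family support (+2).
their_day = {"job_loss": -5, "family_support": 2}
print(empathise(their_day))  # -> distressed
```

      The real difficulty the comment above points to is that `my_policy` here is a one-line heuristic; capturing an actual person’s responses would require a model of vastly greater scale, which is where the computational-power objection bites.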

