Daniel:
All right, guys, hello and welcome. We are doing The Rise of AI: Transforming Work, Daily Life, and the Path to New Careers for Hundo. And with us, we've got some amazing folks. We've got Tery Spataro, I hope I'm saying that correctly. We've got Kenneth Mayfield, or Kenn. And we've got Xander Simms, the legend, the absolute legend. We've been friends and following each other for a long time, so it's amazing to finally be in the same virtual room with you. Let's have some introductions. Let's start with Tery, go to Kenn, and then end with Xander.
Tery:
Okay, I'm gonna make this brief because I've got several decades on all of you. So I call myself an AI creative director. I've been using creative AI since 2020. I got inspired by the artist Anne Spalter, who gave this really awesome demo using Playform, and then it was love at first bite after that. So I uploaded all of my old artwork, the traditional old artwork, and then created my own models. Speaking of traditional, I do have a traditional art background, a commercial background, did a lot of agency work and held a lot of executive positions. But this is the one that I love the most, being a creator that's using AI.
Daniel:
Awesome, alright, let's have Kenn.
Kenn:
Okay, well, I knew from a very young age that I wanted to be an artist. I tried to get into art school early, but I had to wait until high school. Beginning in college, I worked in programming. I grew up drawing, did photography, and started doing audio and video engineering at the artist-run center in Halifax, Nova Scotia. And then from there, I got into more serious programming, with Flash for the multimedia side and then iOS for the device and to learn a real language. And after about 10 years of that, I decided that art was beckoning again and I could not resist the call. So I got back into visual work, blending Unity and the multimedia skills I'd learned. And now of course we have the rise of the metaverse, or the rise of immersive reality, and AI. So I get to bring all of these ideas and skills into one theater, which is this. I'm also on the autistic spectrum, and I advocate for people in business and in work and in school on that side as well.
Daniel:
Thank you Kenn. Now Xander.
Xander:
All right, can you guys hear me okay? All right, wonderful. Yeah, so my name is Xander Simms, creatively known as Xander Love. I'm an artist, technologist, and designer from St. Louis, Missouri. I've been in the space of creative expression for a little bit over a decade. I started with studying music, vocal music and instrumental music and production, and then I moved to the visual arts. And along the way, creativity had done such an amazing job helping me express myself that I also gained a passion for helping brands and individuals express themselves. And so I married the two and got to work with a lot of brands and individuals on some really cool projects, mostly in video for social media, content strategy, strategic marketing plans, and that kind of thing. And in 2021, I made a pivot to deep technology. I got a really cool contract with a space agency out of Houston, Texas. And that was the first time I got to kind of marry my love for science fiction and technological innovation with commercial endeavors. And so after that contract was up, I was looking for the next technology wave or bubble to continue exploring. And that's when I discovered XR and the metaverse and those sorts of technologies. I started using artificial intelligence as an additional layer for that about a year ago, last summer. So it's been about a year. And so my journey for the last year has been incorporating those tools into my workflow and helping brands and individuals express themselves and build new experiences.
Daniel:
So as you can see, we've got some pretty awesome talent. I'll do a real quick introduction for myself, but I'm not in the same category, I would say: I'm, one, much younger and, two, kind of a fresh graduate. I graduated in 2021, I guess. It was a two-year master's program at Pratt where I did digital arts. And so, what is digital art? Very vague, very random term. Turns out AI is one of... the digital arts, you know? So I kind of got into it super early, in 2018. I've been deep diving, worked as a professional AI artist for Sehan Lee, who now does Kubrick and a bunch of other really cool stuff. And now I work in the agency field that Tery is thankfully out of, doing basically innovation design and experiential activations using, you know, as cutting-edge of technology as we can get our clients to pay for. So, you know, I have a lot of fun doing cool stuff, get to work on some really fun, joyous installations, experiences, et cetera. So we're excited to get started. We're gonna go with the first question, and this is gonna be to Kenn. How do you use AI in daily augmentation, like to effect changes to your everyday life with this new technology? And can you give us some examples of how, for you personally, AI has made both personal changes as well as maybe artistic expression changes?
Kenn:
Superb. Okay. These days I'm using different styles of AI. I'm using ChatGPT to give myself a fresh viewpoint on what I write, because I tend to use a lot of commas and write in a 19th-century wordy comma style. So this kind of brings me up to date a bit. And MidJourney, that's been a terrific boon. I use that to kind of revitalize my imagination. I was outside of art, or art creation, for about a decade while I focused on coding, but art has always been the siren song or the spirit inside of me to follow from a young age. It became more, from that point, about figuring out how to make a living. So now what I find with MidJourney is that I can type in a few sentences and then get back something that has a completely different spin or amazing detail or incredible style. So I'm using that for non-commercial purposes like promotion. And I'm using it to inspire myself. And the biggest difference has been the shakeup that it's created in terms of how I think of colors and composition. And it's also, in a way, an insight into not only the entire history of art that's been on the web, but also perhaps what modern styles are as well, like in comic book rendering, in 3D rendering. It's such an instant palette to draw from that it's like sitting at an amazing emotional, perceptual feast. So that's inspired me to begin creating more work by hand in sculpture and painting and a little bit in 3D. I'm trying to reduce the amount of time I spend on the computer and increase the amount of time I'm spending deep diving into myself, my imagination, and to keep nourishing the colorful side of it. And so MidJourney has been essential for bringing me back into the fold of artistic creation and also what other people are doing. Because I think the algorithms are weighted very much toward popular culture, and the more represented something is on the web, the more prevalent it'll be in MidJourney. Now on the other side, I was fortunate enough, with a client, to win funding to create AI for avatar instruction, thank you, and immersion in the industrial metaverse. So there I'm creating adaptive learning with AI, guiding the development of the algorithms. And we're also working on American Sign Language recognition and speech through avatars, because accessibility is very important to me. And I think AI can help give us another shot at creating an accessible universe or accessible internet. So that in itself has been a journey, because ASL is such a multi-layered language, what's called a co-topic language, where you can describe two things at one time. I'm not a speaker, but I've always been in love with the idea of non-verbal communication, because communicating with people as an autistic individual has always been a lifelong challenge. What are the keys? What are the locks? What are the music notes that connect with people? And ASL is very much like that. It exists in a different continuum, and I find that extremely fascinating. So AI is text and images and communication these days.
Daniel:
Wow, that is, one, on a personal level, something I love. That is so spectacular. Anything in accessibility that should be done needs to be done. And the fact that you're doing it is phenomenal. It's also a use case I wouldn't have ever thought about in advance, but thinking about it, it's actually like, wow, yes, it's so much better than captions at every possible step. Terrific, wow. Tery, Xander, do you have anything to say about that? Because that's awesome.
Tery:
I'm so impressed. I love it. I think it's so important to incorporate the accessibility because AI should be for everybody. It's not just for large corporations to own, it's for all of us. And I just love the way you talked about your workflow and what you're doing with it. It's just great.
Kenn:
Thank you. Thank you, Tery.
Xander:
Yeah, definitely. Great point. And to add to that, I thought it was really interesting how you're using MidJourney to kind of free yourself also from the screen and, you know, explore other avenues of cross-reality or IRL expression.
Kenn:
Thank you, thank you. Well, you both inspire me to great lengths, because every day I'm seeing what you're putting online, and that becomes part of the accelerator to live in this universe of visual communication, and hopefully soon sound communication as well.
Daniel:
Okay, and so we're going to transfer to a slightly unrelated question, but now that I think about it, it's actually incredibly directly related to accessibility, to education, and to kind of providing an outlet, an avenue, for new users potentially. This is for Xander. You're working with small children, teaching them to use ChatGPT and MidJourney. How do you see AI impacting the next generation, both in terms of education and just, like... workflow, as weird as that is to say, because it's not like they have a workflow. You know what I mean? Like, how is it going to affect the standards set in place for the next generation of the workforce?
Xander:
Absolutely. Yeah, I do have the pleasure this summer of instructing a few sections of artificial intelligence for a few age groups. The first one is five to eight, and then nine to 12, and then 12 to 18. I have a background in education; it's actually one of the reasons I got into technology. I had this really incredible experience where there was this kid I was working with; he was nonverbal, he was on the spectrum. And long story short, we ended up integrating this app to allow him to be able to speak with any device, like any tablet or smartphone or anything. Him and also the other students at the school, who were before using, like, pen and paper or flashcards to speak. And so that was the first time I had seen technology, in a real way, help people communicate outside of themselves and just open up new worlds of possibilities. And that really started me on the journey of, like, okay, technology, while it has incredible commercial applications, on a personal, human-to-human level, this has the potential to unlock a lot of things, and not just the technology itself, but the knowledge and application of the technology for the needs. So that was what really inspired me. And let me tell you, like, this summer, this is only week two I just finished, but they've already helped expand my vision of the application of AI technology, like, by tenfold. I kid you not. They are so receptive and just imaginative about the possibilities of the technology. Like, the phase of explaining it is so brief, you know, and they instantly just adapted, kind of like, well, how can we do this? Or what can we do with it, rather? Not what is it, and all the ethical questions and technological questions, so much as just how we can adapt it into our lives and make it fun and entertaining. And they've given me some tips on my NFTs as well, about different ways to make it cool. And I've, yeah, I've been absolutely blown away. And the teachers were actually also interested in their reaction so far because of the engagement. And I would say the nine-to-12-year-old group was probably the most active in understanding, because I blended not just AI, but the metaverse and 3D too. And they just took it to another level, because this is where they live, with the more immersive video games like Roblox and Fortnite and Minecraft. And so it was just, like, a revelation how fast they took to it. And we're just starting to generate their images, taking their prompts and generating images from their descriptions. And we're going to reveal them next time. So that's where we're at so far. But it's been an absolutely incredible experience just watching them really take the technology and the concepts and apply it with no friction.
Daniel:
That is so spectacular. I also come from a, like, a STEM/STEAM background, where I was teaching kids science, technology, engineering, art, and math. So I vibe with that super hard. To be fair, it was always the science part I liked teaching the most, even though I was hired for the art, but, you know, nothing like a kid at a microscope, that's so much fun. It is truly spectacular. The idea that we're literally providing them a new outlook into a new reality is so amazing. All right, we're going to switch over to Tery with a little question on something that was brought up that kids seemingly don't care about, but maybe we should all be thinking and talking about a little more. What ethical considerations should be addressed when deploying AI in work environments? How can we ensure responsible and inclusive use, and accessible use, of the technologies, especially when it comes to, like... creators and then the teams associated with creators, right? Because obviously, video editor, colorist, there are very many segments of the industry that are gonna be impacted. And how do you view what should be a set of considerations in this space?
Tery:
Oh my God, Daniel, that was such a great question. And it's interesting. I don't know if you all heard the news this morning, or maybe it hit last night. Google is basically clamping down on anybody that works for them using ChatGPT in ways that they don't feel are going to be good. So, you know, I think we all need to be conscious of what we're doing with our AIs. I'm part of a morning program on a Twitter space called AI Art Today, and the founder of that always leads the program, or signs off, by saying, treat your AI right. But in terms of ethics, I think this is a huge question. It's not something I could take and talk about in just a couple of minutes. But there are a lot of categories, and I think businesses need to have a policy. And I hate that word, policy. I feel, as an artist, policy feels like, you know, I'm being censored or something. But I understand. Like a couple weeks ago, I was in New York visiting some of my old cohorts in the agency world, and I heard stories about folks that are so excited, like myself, using ChatGPT, and, you know, some of them were saying, I'm uploading client data into ChatGPT. And I'm like, is that on your local? Or what are you doing? Don't do that, you know? So I think policy needs to be put in place on, like, usage, and also education. I think with policy, there always has to be education. So, ethical considerations: let's first and foremost say something that I think we've all been talking about a lot, which is humans in the loop, right? Xander on educating, Kenn on workflow and how he brings things together. We all need to make sure that there are humans in the loop, right? Because we all have to be a part of it. I found seven areas to focus on in terms of ethics: bias and fairness; privacy and data protection; transparency and explainability; accountability and liability; inclusivity and accessibility; human oversight and control, so humans in the loop, right? And continuous monitoring and evaluation, which is really important. And also developing that policy. I think every single company (oh, sorry, is that me?) has to have some kind of policy they put forth and outline and educate everybody on, so that you don't, like, pollute the AI, first of all, with nonsense, or do anything nefarious to it, or have those accidents in which your data is being exposed some way, somehow, because we don't really know, right? And also treating it right and having all those considerations of, like, when I'm prompting, I'm prompting with positive prompts, you know, not, you know, things that are going to be a problem.
Daniel:
I've definitely gotten mad at ChatGPT before, though. No, ChatGPT, that's not what I meant. Please, give me what I want.
Tery:
So yeah, there's a lot to consider. The ethics are huge, right?
Daniel:
Those bullet points, honestly, I would give you five minutes at minimum per point if I could. It's the least that is deserved. It's a topic that is not brought up enough. I'm going to still extend this a little more. I'd like to hear what Xander and Kenn have to say. Let's start with Xander. In terms of just general ethics considerations when it comes to both applied AI and AI's impact on our lives. Love what you said about privacy, accessibility, all of that. What do you guys think?
Xander:
Yeah, I think a lot of those notes she mentioned are incredibly important. And then also, you know, kind of like cybersecurity and a few other things. I'm hoping the experts in those fields really make it more open source, I guess you could say, and come together to come up with the best solutions. It's a little bit above my pay grade, but I do know how important it is. So I support all of that. And then the other part is, as far as, you know, like with Creative Commons use. And I think, and I had heard a lot about this in web3, the future of ownership and IP is going to have to radically change. So I'm really looking forward to it, as more adoption happens as well, and we can prevent models being trained off of artists and people who don't want to be trained on. I think that's really important. And I love seeing that on some websites like Sketchfab. I love Adobe's integration, with models trained from licensed material on their platform, so artists and creators can really create freely and kind of have the licensing bit taken care of out of the box. I think that's a really cool model, and I hope to see more creative tools implement those sorts of things, so it's fair. I think that's really important.
Daniel:
Kenn?
Kenn:
Tery has already mentioned, of course, the idea of legal and liability. And Xander has already mentioned, of course, the idea of privacy. So what I think is important to consider is that, especially say with ASL, if we're going to capture people's contribution to ASL, then that's going to be personally identifiable, even if we anonymize it and track only the vectors that represent body position and hand position. There's still going to be a dialect. There's still going to be kind of a personally identifiable accent, I think, to that communication. So that's something to look at. I think that, in a way, as Xander mentioned with licensing, we almost need to have an AI that monitors the AI, so that a recognizable style can be flagged in some way. But the flip side of that, of course, is the ability to be paid as a creator. So I'm concerned, as Xander mentioned, as Tery's mentioned, about authorship and ownership, which is why I don't yet sell any of the MidJourney creations that I have. I don't have enough of my own hands on them, although Adobe has solved that. So originally, the provenance of NFTs was to create secondary sales income. But how do you do that if you can't prove ownership of the concept? So it becomes complicated. And to get back again to privacy, I feel with all of this computing power that's being placed into ChatGPT and MidJourney that it might become easier, and this is outside my pay grade, to identify the holes or the shapes that a person exists in within that AI footprint, that data footprint. Tery has mentioned the input of commercial IP, and then there's also the input of private IP, or thoughts or concerns that we have. So it's something that we really do need to figure out. And because it's coming at us so quickly, it feels like we're already in this science fiction future of living in an AI world, and how do we catch up to the machine? And I too am very polite, I always say please and thank you, although occasionally I do kind of get a little bit frustrated with ChatGPT. As Neil Gaiman pointed out in one of his tweets, ChatGPT creates information-shaped sentences. So you have to always go into the sentence and do some follow-up to see if what it's saying is bona fide.
Daniel:
No, yeah, 100%. And I just want to kind of come in and do the same as Xander did, in that when I was doing my, you know, cursory initial research on the web3 space, what I found most valuable was, as a digital artist, as someone who's done a lot of research in terms of curation and collection of digital archival works, it's kind of weird. It's kind of tricky. It's like, if you're a video artist and your video art has an installation component, how do you give them the file? How do they store the file? If you've got a projection-mapped piece that doesn't require it, maybe it's a facade of a building, how is that collected? How is that stored? How do you know who made it, et cetera? So I think that what we're getting after this kind of web3 push in the last couple of years, and now what is obviously the AI push that's going on currently, is that we're starting to combine these base concepts of, like, ownership and provenance and collectibility, and making sure that at the root of the creation, the artists are paid and credited. And I think one of the more exciting things for me, in terms of this new breadth of new technologies, is how we're rethinking what to do with them and how to store them and how to share them and how to do it all in a way that helps as many and hurts as few as possible. So thank you all for the great answers. It's awesome to know that other people are also thinking about the same stuff, because it's very important. Ethics is super important, especially with applied AI. Tery, like you said in the very beginning, you made your own data set and you've trained a model with that data set. And that's your work. And I think that that's the future of applied visual AI: really, like, IPs taking control of their own IP, maybe providing access to artists and creators via some, you know, portioned-out, allocated amount per sale. But that openness is really what AI has done, because, again, the best AI is open source, and the ones that aren't are the ones where you get problems, you get sued, you get issues. And if we want a proper open-source ecosystem in the AI space, we have to get open-source funding. We have to get ethically trained data. We have to use properly paid image segmenters, the people who do the micro-adjustments and add detail tags to images so that we can then use them to train our data. Because people need to be paid more than $2 a day. So it's starting these conversations and then finishing them with action that I think is something we really need to start doing more of. And I'm glad I'm in a room surrounded by people who do that. It's awesome. So we're going to go straight to Kenn with this one. How is AI being used for employment and creating inclusive opportunities when it comes to people on the spectrum, ADHD, other mental health issues, neurodiversity in general? How has it benefited? How has it maybe negatively impacted? Like, what's your opinion on how AI, employment, and those with disabilities can kind of mesh into something that is better for the future?
Kenn:
Okay, well, that's a great question. And it's a mountain of other questions with many trees of details, so I'll make my path through it as well as I can. In the last few months, since the summer, I was accepted into the Startup Wise Guys pre-accelerator program. And initially this was for the Ruby Room, which was my environment for presenting culture and ideas, which Xander took part in. And from that, I came up with the idea of a league, where we could maybe start to rewrite the rules on autistic representation and employment. And that led to considering, well, within an investment situation, what would be useful? We could start to talk about AI in terms of matching autistic skills and talents and focus and accommodations to work opportunities. And it's a little bit amorphous right now, because we need to talk to the autistic community. The old saying, of course, is if you've met one autistic person, you've met one autistic person, because we're very individualized in terms of our likes and preferences and skills and needs. So this is an interesting problem, because employment, to my mind, is basically set up in an industrial model. And in true autistic form, if I'm visualizing a word and then trying to say it, the two clash and it doesn't quite work out. So I'll try and stay above the surface of that ocean. Employment has its own built-in biases in terms of how, these days, resumes are input into the system, how interviews are proceeded with, occasionally how videos are analyzed in terms of, is an applicant truthful, are they confident? These are all issues that come up when you're an autistic person. If we admit that we're autistic, then we may be passed over for a position, particularly one in a position of authority. If we show autistic traits, like you see me moving around very often or looking off to the side, then those may not come across as well to an uninformed employer. And once you're within a role itself, there may be a limited time span for it, because sooner or later, the interactions with the neurotypical community or the company culture will begin to put stress upon the need for accommodations for an autistic individual. So what we hope to do is to rewrite how we're perceived, because is it really a question of disability on the autistic spectrum, or is it a question of conflict with the status quo? There was an experiment done where groups were created, one a neurotypical group, one an autistic group, to measure how well they could solve problems. And it turned out they were both about equally competent in solving problems. But when you created a mixed group, that dropped. So it's a question of communication and accommodation. So what I'm hoping, through AI and through the immersive world, and especially through MidJourney, for example, which creates so much visual material, is to try and widen the scope of how we're accepted. During the creation of my pitch for this accelerator, I found that the statistics for employment were really drastic. More than half of us have a secondary education, but only a fraction of us, something like 2.4% it seemed in the EU, or 15% in most of North America, a little bit higher in the United States, have employment or steady employment. And when I say a role has a limited lifespan for autistic individuals, and I know I'm going on, I'll wrap it to a conclusion.
Daniel:
No, no, please, this is important. You know, we need to, this is what technology is for, in my opinion. Go until you're done.
Kenn:
Thank you, I appreciate it very much. The time spans in between employment, or steady employment, can be quite drastic. For me, it's been about three years between employment opportunities. And so I've had to make my own, which is why I became an entrepreneur, to have more control over accommodations and what I could accomplish. For other individuals I've heard seven years, and these are married individuals with house payments who get by on little bits of work until they hopefully, eventually find a role that really suits them well. So I think with the evolution of AI in the metaverse, there's a lot of opportunity for building. And I think that autistic individuals and individuals with ADHD, dyslexia, and so on, we have different talents in terms of, say, pattern recognition or an alternate point of view, which innovation depends upon. And in terms of focus, we statistically are also more productive than neurotypical groups, which may be part of the problem. We do so well that, you know, it creates conflict. But we also have a bias towards integrity. We have a difficult enough time puzzling out very subtle divergences in communication, sometimes intentionally misdirecting or not, that we often ask, can you clarify that, or be more specific? Of course, anyone on the spectrum can be a liar or difficult or have their own problems, but by and large, we're focused on just trying to function within a neurotypical environment. So put these traits together: an alternate viewpoint; deep interests in a specific area that we might be focused on at the time, special interests; the ability to really focus on what we're doing, increasing our productivity; and the ability to really be concerned with communication. That seems to make us a perfect fit for evolving this new spatial web, this new spatial universe, for innovating solutions, for creating things that can really pay back multitudes of value for employers. But we have to get around the bias issues. We have to understand, or the neurotypical environment has to understand, how we can navigate or sidestep these biases and make room for our particular characteristics, because we've been around for as long as people have. One in every hundred babies born is autistic. So this is an adaptation, I think. It's not a disability in the sense that not being able to see or not being able to hear may be. So let's make room for all of us, because in creating the solutions that are increasingly planet-sized, climate change, AI, and the rest, let's make room for the genius that exists everywhere. And that's what I hope we can do through these opportunities in AI. Thank you.
Daniel:
Heck yeah, that was an amazing answer. Oh my gosh. Yeah, so I'm, if you can't tell, very, very ADHD. So I appreciate that a lot. It has taken me many years to kind of figure out my place, my best mode of being. And I definitely think I'm there now, but you know, I'm 30 now, so it's taken me some time. I can only imagine, you know, the work and pressure that you put on yourself, but... it's amazing to have someone like you in the community, in the industry, making waves, making changes, and thinking about this. Because again, your generic neuro-typical person isn't gonna necessarily even know that they have to be aware of stuff like this. But this is what technology allows for, right? I think some of the more impactful VR, AR, mixed reality experiences I've ever had have been those that give me someone else's perspective.
Kenn:
I completely agree. I tend to think of VR as theater, that it's another step in the evolution of theater. We originally would paint on cave walls; we'd have ceremonial processes there to handle the world and have a deeper understanding of oneself. Then we had Greek theater and amphitheaters, the use of masks, a narrative. Then Méliès, with his imaginative studios at the turn of the century. And then of course cinema. And now we have VR, and it's a way of stepping onto the stage with a performer, being in their skin, as it were.
Daniel:
Yeah, the empathy that can be built with technology is so powerful, and it's not nearly done enough. And it's often done for the opposite purpose, and it's somewhat frustrating. So we have one group question at the end, but we have one before that. So let's go back to Xander. As a creator now, how does AI help you enhance the projects you work on, from the digital Xander Love collection to other NFT projects to, you know, even your work with children and just general education, and kind of communicating, whether that's a brand journey that you're trying to communicate with AI, or with the help of AI, or, you know, your personal social media presence, which is badass, and which you have actually amplified so well that you're an icon within the community.
Xander:
Man, thank you. That's so humbling. I appreciate it. I'm just like, that's incredible. To answer your question, first of all, my brain is just going from all the conversations here. These are some of my favorite influencers in the world, so hearing them, hearing this talk, I'm trying to stay focused. But yeah, a really interesting revelation I had a few years ago, before I even started experimenting with generative AI and this current trend: I was super into science fiction, like I said, and I can't remember what the film was, or maybe it was something I read, I don't remember where I got this concept, to be honest. But it was how, when we think of artificial intelligence, we usually think of it increasing our capacity for, you know, work and maybe general intelligence. However, one thing that is often maybe overlooked or forgotten is its capacity to increase our emotional intelligence with increased empathy. So, you know, even as Kenn was speaking about how the communication kind of falls between neurotypical and neurodivergent groups, I was thinking of AI kind of being a mediator of that. And one of the biggest things I've learned from using AI for the last year is how I have to be very deliberate in my words when speaking to it, you know. A lot of times with humans, with each other, we're very vague, and a lot of times we don't really fully know what the other person is saying, we just have to hope we're right, you know? But with artificial intelligence, it's a really good exercise in a lot of the creative arts: being very deliberate and specific about what you're describing and the images that are in your head, and how to translate those to other people. So I think, to get back to your question, that's where it's been the most freeing for me. Even before my journey with AI, the goal I've had, or as I tell the kids in my classroom, the superpower I've been trying to develop, is kind of like this quantum ability to create at the speed of thought, for myself or for others. And I think generative AI has really enhanced that. And that really raises empathy, because it's not just myself; it's being able to see other people's worlds that have been, you know, trapped inside of their minds, that they can experience and see and feel, but no one else can. Instantly now we can share those as visual thoughts, and the more we're seeing those, the more it's increasing our palette to appreciate and understand and maybe even just visualize, whoa, that's what's going on in other people's heads. That's really cool. And, you know, also as an artist, it helps me push the boundaries of imagination, my own imagination, asking it to envision things for me and then kind of curating those realities and sharing them. It has allowed me to, you know, reach beyond some limitations that I had, and concepts that have been in my head for years that I've been trying to develop and put out into the world. Now I can clear that space and share it and get feedback and just continue the loop of self-expression and self-actualization that comes with creating and sharing those experiences. So it's, like, every day I'm just so grateful for the time that I'm alive and that I get to do this. Like, dude, I can't believe this. It's incredible. It's so cool to be able to see this.
Daniel:
How can everyone prepare themselves for the future of AI-driven everything, right? How can they take advantage of this new set of tools to positively interact with their emerging careers, their interests? How can they use the parts of it they might like? Maybe they don't like everything. Maybe they only like ChatGPT. Maybe they only like the visual side. Maybe they only wanna make videos, and right now there's not that many tools for that. Maybe they wanna learn how to get into it. How can they take those first steps?
Tery:
Wow, Daniel, you're a tough act to follow. So first and foremost, you know, anybody who's watching this, just hit me up. I'm happy to teach anybody. You can hit me up on Twitter; I'm just at Tery on Twitter. And I've been doing a lot of education in this, like Xander and Kenn. You know, I tend to educate the seniors, but over the summer I'll be educating the youngsters. We're gonna do an AI hackathon here in my community, which is very cool. But education, like, find those resources. Like, AI Art Today on Twitter is a great space that happens every single day except the weekends, and we talk about all the tools and all the things. Like, we just talked copyright; I think we spent almost a month just talking about copyright, you know, all the tools that we're using, the ethics around it, the artists that are doing such great things, like ourselves. And, you know, there aren't many books yet, except I have a book coming out in a couple of months on brand and AI, another book; that would be ten. I've done a ton of books. But experimentation too, don't be afraid, you know? Go in and try things out. I know I hit really hard on the whole ethics thing, but, you know, don't be afraid to at least experiment and use those. So, you know, try all the tools to see what you like, from ChatGPT to MidJourney to even Playform. Gosh, hit me up, I'll get you, you know, some free tokens on Playform if you wanna try it out. But yeah, experiment. Like I think you mentioned, like, video, I've been doing a lot of experimentation with video, because I want to take my science fiction novel, Laundry Gate, into a video that I can control and show the characters the way they should be. Also, you know, Beyond Darkness is another one. Like, sorry, Kenn, I did go ahead and use MidJourney and I did publish my book, so I do have a copyright on it. I did do it. But I want to take that book and make it into, like, a video. And so I'm learning. I'm learning things that, you know, I haven't used in decades, like Macromedia. Oh my God, did I just date myself? But that's cool, just get out there. Join Google's AI Test Kitchen. I'm on there, I've been using it. I got to train LaMDA, and now I'm in MusicLM. So it's very cool. There are so many resources, and of course, all of us. So those are some of the things.
Kenn:
You're tremendously inspiring to me, Tery, because you've produced these books, you participate in the community, you educate and you express it so well and so clearly. What a treasure, yeah.
Tery:
Thank you.
Daniel:
And let's go on. Okay, so we did Tery; now Kenn, same question. What are the best first steps you can recommend to incoming interested parties?
Kenn:
Okay, the first thing I'd say is exactly as Tery's mentioned: jump right in. And as you've mentioned, jump right in and don't be afraid to break things. Type in a prompt with MidJourney and find out what the result is. Get to know what the contours of ChatGPT's responses are like, where it seems to have great content and where it might be trying to make things up. And I would also have the perspective that this is all taking place in the continuum of human development, that if we could think of our thoughts as manifest in ChatGPT or MidJourney, they're like echoes in the corridor of time. So maintain perspective on that. As an autistic person, I found getting back into meditation helps a lot, because it's easy to jump into the ocean of these possibilities, but then to create with them as well. As Tery has mentioned, you've created publications, you've participated in community, and this multi-dimensional aspect of living within the generative AI potential, I think, can really be nourishing, but stay in touch with your center as a creative personality. So experiment, break things, and keep on trucking.
Daniel:
And we'll end it with Xander, and then we'll have one last little end-all wrap-up hangout.
Xander:
So I think one of the first and coolest things that I do when I experiment with new technologies or new fields: even during the pandemic, I became a very, very big MMA fan, just because of all of the intricacies of all the different martial arts styles and how they apply them, and the personalities. And I was just super interested, knew nothing about fighting. I mean, I didn't know the difference between a hook and an uppercut, really, or a jab, or any of these things. So basically what I did was, and kind of like you guys have mentioned, I found people who do talk about it and I followed them. The things I was hyper interested in, I could find online somebody else who was just as into it, hopefully, and then just immerse myself in those communities and worlds. And then also finding influencers or minds that resonate with you. If they talk about stuff you like, and they do stuff you like, just follow them. Try to be as active as you can. Don't be afraid to leave a comment and ask a question. Like Daniel said, there was this Steve Jobs quote that, I always will remember it because it kind of pissed me off in the moment, but then later I got a really good, insightful revelation. And he said that, like, anytime he wanted something, he would reach out to people and the universe, and doors would always open, you know, and he would always have that access and always gain it. And I was like, well, that's great for you, Mr. Steve Jobs, but I don't have that same network of opportunity. However, you know, X amount of years later, in this new paradigm, now we do. Now we can connect and, within two or three clicks, reach out. Sometimes it's positive, sometimes not, you never know until you try, but more often than not for me, when I have reached out to find those people or those things that I'm interested in and very passionate about, the doors have always been open. So I would just say, you know, I encourage you to stay curious, or as Steve Jobs said, stay hungry, stay foolish, and just keep reaching out for the things you're interested in and chasing it. And, yeah, you know, use those people who have failed before you as inspiration, and know that growth usually happens there, in that exploration and interest. Just nurture your natural curiosity, you know, wherever that may lead you.
Tery:
That's great.
Daniel:
What is the one thing in the AI space each of you is most excited about? And we can start with Xander.
Xander:
Hey, I am most excited about, and this may be a bit self-serving, the release of my NFT project and metaverse experience, Let Go, and the AI Journey X1s. I've been working on them for a year, and my goal was to start utilizing the tools in a real, recognizable way, because, like, I don't know for a hundred percent sure, but I started realizing that MidJourney itself and some of these AI tools themselves have a look and a feel. I don't know if artists can differentiate themselves with their particular style, but maybe they can implement it for a particular use case and be associated with that. So that's what I did with the AI Journey X1s, as a way to really give myself a good use case and implement it directly into my creative process. And then, at large, it's a part of Let Go, which is a metaverse mosaic, a quest-based mosaic where you travel through nine worlds collecting mythical objects to get this ultimate seed phrase that has a really deep meaning, a deep meaning about the nature of a lot of the things we've been talking about: expression, consciousness, and some of those more abstract concepts. The other part is, because the AI Journey is one, one of the use cases is to get people more into our world, you know, an easy gateway. Get them into the tools, get them free merchandise, and let them come have positive group experiences in our industry. Sometimes there can be waves where the press negatively portrays us, and for good reason sometimes, because of some of the characters in there. But like you said, not all of us are in it, A, for the trendiness, or B, just for the money grab. There are really genuine people who are interested in the technology and connecting with others and helping others do the same. So, yeah, I encourage all of you to check that out, try to join the community, and at the very least play the experience. Keep a lookout for it. It's called Let Go, a metaverse mosaic from Xander Love.
Daniel:
All right, let's go with Kenn.
Kenn:
Okay, well, tough act to follow, because Xander has it all in place when it comes to the sound and the music and the theme and the color and the shapes and the rest of it. So what I'm going to do is build off of my own little metaverse, creating a Ruby Room 2, which is almost completed. And I hope you forgive me for the nerdy name, but I'm calling it the Cathedral of Funk. It's going to be another place to hang out, and part of what I'm calling a rebel metaverse, a place where we can put up tags, a place where we can make the spaces our own instead of just going through the commercial side of things entirely. And the other thing I'm excited about is creating the League of Extraordinary Talent, for neurodiverse individuals to rebrand ourselves, to represent ourselves differently, as superheroes in our own milieu. So that's going to be a great creative endeavor as well. It's all based on a comic book look, so that's going to be impossible to ignore.
Daniel:
Alright, and Tery, how about you?
Tery:
Oh my God, that's a loaded question. I have so many projects going on, but I'm gonna stick with: I wanna learn the music AI. So like I said, I'm working with MusicLM. I did a few little things and pushed them out there using MusicLM and Deforum. Deforum gives me the video. So I wanna explore that more so I can create my own content from it. But the thing that I'm hoping for, a little bit of blue sky for the future, and since we're all loving science fiction: I really do want my own AI assistant, but not like the assistant kind of thing. I want this companion that, you know, I could have these deep discussions with, that I could trust to not be a gofer, but, like, bring me back some things that are, you know, deep and open my mind some more. So I'm hoping that's where the future takes us in terms of AI and creativity and our own imagination, which is so incredible. So thank you so much for having me on this. This is so much fun.
Daniel:
So much fun. And I'll end it on kind of the same note. My biggest excitement, and the thing I've been working on now for four and a half, five years, is the blossoming field of 3D generative AI, which is now doing some amazing... we're finally there, guys. It's been so long. I was doing this stuff early; you should see my independent study from 2018 at Pratt, which was literally making an AI generative plugin for 3D assets for Unity, which I did not finish. And it's still not done, not by Nvidia, no one's done it. So, you know, I got in over my head, but we're so close. Get into 3D. There's such an assortment of Blender plugins if you're at all interested in the 3D space, in terms of asset rendering, making fun 3D-printable objects. I've got a fun example. Look at that. That's an AI-generated chair, made in MidJourney, processed through Kaedim, and then printed on a 3D printer. So, like, yes, it doesn't look really comfortable; if you sit on it, it won't be very nice, right? But the premise is we're at the point where I can get a file out of the AI that I can give to my printer with minimal human input, this one required a little bit, and get it printed. I can just get it. Like, if I sent this to a fabricator, they would just make it. There are no additional steps really, other than the workflow that you set up for 3D, and that's getting easier and easier. So that's what I'm most excited about. I think the idea of metaverse spaces populated entirely by unique individual input is the future of immersion. And then it's the future of personalized immersion. It's the future of personalized storytelling, which is, like... obviously Tery knows, obviously Kenn and Xander know, it's storytelling. Even though we're in tech, even though we're using AI, that's why we're here. That's why we do art, that's why we use technology: it's to tell our stories. It's just the best tool that we've found to tell that story. I sure wish I could paint, but I can't. So I guess I'm gonna use AI. And with that, we're done, guys. That was pretty awesome.
Tery:
That was so great.
Kenn:
Yeah!
Daniel:
All right, guys, hello and welcome. We are doing the rise of AI, transforming work, daily life, and the path to new careers for Hundo. And with us, we've got some amazing folks. We've got Tery Spataro, I hope I'm saying that correctly. We've got Kenneth Mayfield, or Kenn. And we've got Xander Simms, the legend, the absolute legend. We've been friends and following each other for a long time, so it's amazing to finally be in the same virtual room with you. Let's have some introductions. Let's start with Tery and go to Kenn and then end with Xander
Tery:
Okay, I'm gonna make this brief because I've got several decades on all of you. Um, so I call myself an AI creative director. I've been using, uh, creative AI since 2020. Um, I got my, I got inspired by artists and Spelter who gave this really awesome demo using Playform and then it was love at first bite after that. So I uploaded all of my old artwork, the traditional old artwork, and then created my own models. Speaking of traditional, I do have a traditional art background, commercial background, did a lot of agency work and held a lot of executive positions. But this is the one that I love the most, being a creator that's using AI.
Daniel:
Awesome, alright, let's have Kenn.
Kenn:
Okay, well, I knew from a very young age that I wanted to be an artist. I tried to get into art school early, but I had to wait until high school. Um, beginning in college, I worked in programming. I grew up drawing, did photography, uh, started doing audio and video engineering, uh, at the artist run center in Halifax, Nova Scotia. And then from there, I got into more serious programming, uh, with flash for the multimedia side and then iOS for the device and to learn a real language. And after about 10 years of that, I decided that art was beckoning again and I could not resist the call. So I got back into visual work, blending unity and the multimedia skills I'd learned. And now of course we have the rise of the metaverse or the rise of immersive reality and AI. So I get to bring all of these ideas and skills into one theater, which is this. I'm also on the autistic spectrum and I advocate for people in business and in work and in school, on that side as well.
Daniel:
Thank you Kenn. Now Xander.
Xander:
All right, can you guys hear me okay? All right, wonderful. Yeah, so my name is Xander Simms, creatively known as Xander Love. I'm an artist, technologist, and designer from St. Louis, Missouri. I've been in the space of creative expression for a little bit over a decade. I started with studying music and vocal music and instrumental music and production. And then I moved to the visual arts. And along the way, I, the creativity had done such an amazing job, helping me express myself. I also gained a passion for helping brands and individuals express themselves. And so I married the two and got to work with a lot of brands and individuals to do some really cool projects, mostly in video for social media, content strategy, strategic marketing plans, and that kind of thing. And i n 2021, I made a pivot to deep technology. I got a really cool contract with a space agency out of Houston, Texas. And that was the first time I got to kind of marry my love for science fiction and technological innovation with commercial endeavors. And so after that contract was up, I was looking for the next technology wave or bubble to continue exploring. And that's when I discovered XR and the metaverse and those sort of technologies. I started using artificial intelligence as an additional layer for that in about 20, about a year ago, yeah, last summer. So it's been about a year. And so my journey for the last year has been in incorporating those tools into my workflow and helping brands and individuals express themselves and build new experiences.
Daniel:
So as you can see, we've got some pretty awesome talent. I'll do a real quick introduction for myself, but I'm not in the same category. I wouldn't say I'm one much younger and two kind of a fresh graduate. I graduated in 2021, I guess. It was a two-year master's program at Pratt where I did digital arts. And so, what is digital art? Very vague, very random term. Ends up AI is one of... the digital arts, you know? So I kind of got into it super early in 2018. I've been deep diving, worked as a professional AI artist for Sehan Lee, who now does Kubrick and a bunch of other really cool stuff. And now I work in the agency field that Tery is thankfully out of doing basically innovation design and experiential activations using, you know, as cutting edge of technology as we can get our clients to pay for. So, you know, have a lot of fun doing cool stuff, get to work on some really fun, joyous installations, experiences, et cetera. So we're excited to get started. We're gonna go with the first question, and this is gonna be to Kenn. How do you use AI in daily augmentation, like to affect changes to your everyday life with this new technology? And can you give us some examples of how, for you personally, AI has made both personal changes as well as maybe artistic expression changes.
Kenn:
Superb. Okay. These days I'm using different styles of AI. I'm using chat GPT to give myself a fresh viewpoint on what I write because I tend to use a lot of commas and write in a 19th century wordy comma style. So this kind of brings me up to date a bit. And with the mid journey, that's been a Teryfic boom. I use that to kind of revitalize my imagination. I was outside of art or art creation for about a decade while I focused on coding, but art has always been the siren song or the spirit inside of me to follow from a young age. It became more from that point of figuring out how to make a living. So now what I find with mid-journey is that I can type in a few sentences and then get back something that has a completely different spin or amazing detail or incredible style. So I'm using that for non-commercial products like promotion. And I'm using it to inspire myself. And the biggest difference has been the shakeup that it's created in terms of how I think of colors and composition. And it's also in a way an insight into not only the entire history of art that's been on the web, but also perhaps what modern styles are as well, like in comic book rendering, in 3D rendering. It's such an instant palette to draw from. That it's like sitting at an amazing emotional perceptual feast. So that's inspired me to begin creating more work by hand in sculpture and painting and a little bit in 3d. I'm trying to reduce the amount of time I spend on the computer and increase the amount of time I'm spending deep diving into myself, my imagination and to keep nourishing the colorful side of it. And so mid journey has been essential for bringing me back into the fold of artistic creation and also what other people are doing. Because I think the algorithms are weighted very much on popular culture. And the more represented something is on the web, the more prevalent it'll be in mid-journey. Now on the other side, I was fortunate enough with a client to win funding to create AI for avatar instruction, thank you, and immersion in industrial metaverse. So there I'm creating adaptive learning with AI, they're guiding the development of the algorithms. And we're also working on American Sign Language recognition and speech through avatars, because accessibility is very important to me. And I think the AI can help give us another shot at creating an accessible universe or accessible internet. So that in itself has been a journey because ASL is such a multi-layered language. and what's called a co-topic language where you can describe two things at one time. I'm not a speaker, but I've always been in love with the idea of non-verbal communication because communicating with people as an autistic individual has always been a lifelong challenge. What are the keys? What are the locks? What are the music notes that connect with people? And ASL is very much like that. It exists in a different continuum. And I find that extremely fascinating. So AI is text and images and communication these days.
Daniel:
Wow. On a personal level, I love that. That is so spectacular. Anything accessibility-related that can be done needs to be done, and the fact that you're doing it is phenomenal. It's also a use case I wouldn't have ever thought about in advance, but thinking about it, it's like, wow, yes, it's so much better than captions at every possible step. Terrific, wow. Tery, Xander, do you have anything to say about that? Because that's awesome.
Tery:
I'm so impressed. I love it. I think it's so important to incorporate the accessibility because AI should be for everybody. It's not just for large corporations to own, it's for all of us. And I just love the way you talked about your workflow and what you're doing with it. It's just great.
Kenn:
Thank you. Thank you, Tery.
Xander:
Yeah, definitely. Great point. And to add to that, I thought it was really interesting how you're using MidJourney to kind of free yourself also from the screen and, you know, explore other avenues of cross-reality or IRL expression.
Kenn:
Thank you, thank you. Well, you both inspire me to great lengths, because every day I'm seeing what you're putting online, and that becomes part of the accelerator to live in this universe of visual communication, and hopefully soon sound communication as well.
Daniel:
Okay, and so we're going to move to a slightly unrelated question, although now that I think about it, it's actually directly related to accessibility, to education, and to providing an outlet, an avenue, for new users potentially. This is for Xander. You're working with small children, teaching them to use ChatGPT and Midjourney. How do you see AI impacting the next generation, both in terms of education and just, like, workflow, as weird as that is to say, because it's not like they have a workflow yet, you know what I mean? Like, how is it going to affect the standards set in place for the next generation of the workforce?
Xander:
Absolutely. Yeah, I do have the pleasure this summer of instructing a few sections on artificial intelligence for a few age groups. The first one is five to eight, then nine to 12, and then 12 to 18. I have a background in education; it's actually one of the reasons I got into technology. I had this really incredible experience where there was this kid I was working with, he was nonverbal, he was on the spectrum. And long story short, we ended up integrating this app to allow him to speak with any device, like any tablet or smartphone or anything. Him, and also the other students at the school who before were using, like, pen and paper or flashcards to speak. And that was the first time I had seen technology, in a real way, help people communicate outside of themselves and just open up new worlds of possibilities. And that really started me on the journey of, okay, technology, while it has incredible commercial applications, on a personal, human-to-human level, has the potential to unlock a lot of things, and not just the technology itself, but the knowledge and application of the technology for the needs. So that was what really inspired me. And let me tell you, this summer, this is only week two I just finished, but they've already expanded my vision of the application of AI technology, like, tenfold. I kid you not. They are so receptive and just imaginative about the possibilities of the technology. The phase of explaining it is so brief, you know, and they instantly just adapted, kind of like, well, how? How can we do this? Or what can we do with it, rather? Not what is it, and all the ethical questions and technological questions, so much as just how we can adapt it into our lives and make it fun and entertaining. And they've given me some tips on my NFTs as well, about different ways to make them cool. And I've, yeah, I've been absolutely blown away. And the teachers were actually also interested in their reaction so far, because of the engagement. I would say the nine-to-12-year-old group was probably the most active in understanding, because I blended in not just AI but the metaverse and 3D too, and they just took it to another level, because this is where they live, with the more immersive video games like Roblox and Fortnite and Minecraft. And so it was just a revelation how fast they took to it. And we're just starting to generate their images, taking their prompts and generating images from their descriptions, and we're going to reveal them next time. So that's where we're at so far. But it's been an absolutely incredible experience just watching them really take the technology and the concepts and apply them with no friction.
Daniel:
That is so spectacular. I also come from a STEM, STEAM background, where I was teaching kids science, technology, engineering, art, and math, so I vibe with that super hard. To be fair, it was always the science part I liked teaching the most, even though I was hired for the art, but, you know, nothing like a kid at a microscope, that's so much fun. It is truly spectacular, the idea that we're literally providing them a new outlook into a new reality. All right, we're going to switch over to Tery with a little question on something that was brought up, something kids seemingly don't care about, but maybe we should all be thinking and talking about a little more. What ethical considerations should be addressed when deploying AI in work environments? How can we ensure responsible, inclusive, and accessible use of these technologies, especially when it comes to creators and the teams associated with creators, right? Because obviously, video editors, colorists, there are many segments of the industry that are going to be impacted. How do you view what should be the set of considerations in this space?
Tery:
Oh my God, Daniel, that was such a great question. And it's interesting, I don't know if you all heard the news this morning, or maybe it hit last night: Google is basically clamping down on anybody who works for them using ChatGPT in ways that they don't feel are going to be good. So, you know, I think we all need to be conscious of what we're doing with our AIs. I'm part of a morning program on a Twitter space called AI Art Today, and the founder always leads the program, or signs off, by saying, treat your AI right. But in terms of ethics, this is a huge question. It's not something I could take and talk about in just a couple of minutes. But there are a lot of categories, and I think businesses need to have a policy. And I hate that word, policy. As an artist, policy feels like, you know, I'm being censored or something. But I understand. Like, a couple weeks ago I was in New York visiting some of my old cohorts in the agency world, and I heard stories about folks that are so excited, like myself, using ChatGPT, and some of them were saying, I'm uploading client data into ChatGPT. And I'm like, is that on your local? Or what are you doing? Don't do that, you know? So I think policy needs to be put in place around usage, and also education. With policy, there always has to be education. So for ethical considerations, let's first and foremost say something I think we've all been talking about a lot, which is humans in the loop, right? Xander on educating, Kenn on workflow and how he brings things together: we all need to make sure that there are humans in the loop, because we all have to be a part of it. I found seven areas to focus on in terms of ethics: bias and fairness; privacy and data protection; transparency and explainability; accountability and liability; inclusivity and accessibility, which Kenn touched on; human oversight and control, so humans in the loop, right; and continuous monitoring and evaluation, which is really important. And also developing that policy. I think every single company (oh, sorry, is that me?) has to have some kind of policy that they put forth and outline, and educate everybody, so that you don't pollute the AI with nonsense, first of all, or do anything nefarious to it, or have those accidents in which your data gets exposed some way, somehow, because we don't really know, right? And also treating it right, and having all those considerations, like when I'm prompting, I'm prompting with positive prompts, you know, not things that are going to be a problem.
Daniel:
I've definitely gotten mad at ChatGPT before, though. No, ChatGPT, that's not what I meant. Please, give me what I want.
Tery:
So yeah, there's a lot to consider. The ethics are huge, right?
Daniel:
Those bullet points, honestly, I would give you five minutes at minimum per point if I could. It's the least that is deserved. It's a topic that isn't brought up enough. I'm going to extend this a little more; I'd like to hear what Xander and Kenn have to say. Let's start with Xander. In terms of just general ethics considerations, when it comes to both applied AI and AI's impact on our lives. Love what you said about privacy, accessibility, all of that. What do you guys think?
Xander:
Yeah, I think a lot of those notes she mentioned are incredibly important. And then also, you know, cybersecurity and a few other things. I'm hoping the experts in those fields really make it more open source, I guess you could say, and come together to come up with the best solutions. It's a little bit above my pay grade, but I do know how important it is, so I support all of that. And then the other part is, as far as, you know, Creative Commons use. I had heard a lot about this in web3, but the future of ownership and IP is going to have to radically change, and I'm really looking forward to it as more adoption happens, and as we can prevent models being trained off of artists and people who don't want their work trained on. I think that's really important, and I love seeing that on some websites like Sketchfab. I love Adobe's integration, with models trained from licensed material on their platform, so artists and creators can really create freely and kind of have the licensing bit taken care of out of the box. I think that's a really cool model, and I hope to see more creative tools implement those sorts of things, so it's fair. I think that's really important.
Daniel:
Kenn?
Kenn:
Tery has already mentioned, of course, the idea of legal and liability, and Xander has already mentioned the idea of privacy. So what I think is important to consider is that, especially say with ASL, if we're going to capture people's contributions to ASL, then that's going to be personally identifiable, even if we anonymize it and track only the vectors that represent body position and hand position. There's still going to be a dialect. There's still going to be a kind of personally identifiable accent, I think, to that communication. So that's something to look at. I think that, in a way, as Xander mentioned with licensing, we almost need to have an AI that monitors the AI, so that a recognizable style can be flagged in some way. But the flip side of that, of course, is the ability to be paid as a creator. So, as Xander and Tery have mentioned, I'm concerned about authorship and ownership, which is why I don't yet sell any of the Midjourney creations that I have; I don't have enough of my own hands on them, although Adobe has solved that. Originally, the point of NFT provenance was to create secondary sales income, but how do you do that if you can't prove ownership of the concept? So it becomes complicated. And to get back again to privacy: I feel that with all of this computing power being placed into ChatGPT and Midjourney, it might become easier, and this is outside my pay grade, to identify the holes or the shapes that a person exists in within that AI footprint, that data footprint. Tery has mentioned the input of commercial IP, and then there's also the input of private IP, or thoughts or concerns that we have. So it's something that we really do need to figure out. And because it's coming at us so quickly, it feels like we're already in this science fiction future of living in an AI world, and how do we catch up to the machine? And I, too, am very polite; I always say please and thank you, although occasionally I do get a little bit frustrated with ChatGPT. As Neil Gaiman pointed out in one of his tweets, ChatGPT creates information-shaped sentences, so you always have to go into the sentence and do some follow-up to see if what it's saying is bona fide.
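To ground the idea of capturing "vectors that represent body position and hand position," here is a minimal illustrative sketch, assuming MediaPipe Hands and OpenCV (tools not named in the conversation, chosen only for illustration); it is not the actual system described above. The point it makes is that even an anonymized stream of bare landmark coordinates can still carry a personally identifiable signing accent.

```python
# Hypothetical illustration only; not the panelists' actual ASL pipeline.
# Streams normalized hand-landmark vectors from a webcam using MediaPipe Hands
# (an assumed library choice; nothing in the conversation names a specific tool).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands


def landmark_vectors(max_frames=100):
    """Yield one flat (x, y, z) landmark vector per detected hand, per frame."""
    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(static_image_mode=False, max_num_hands=2) as hands:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            for hand in results.multi_hand_landmarks or []:
                # Even this "anonymized" vector can encode a personal signing
                # accent; that is the privacy concern raised above.
                yield [c for lm in hand.landmark for c in (lm.x, lm.y, lm.z)]
    cap.release()


if __name__ == "__main__":
    for vec in landmark_vectors(max_frames=30):
        print(len(vec), vec[:6])  # 21 landmarks x 3 coords = 63 values per hand
```

Running it prints one 63-number vector per detected hand per frame; any re-identification risk would live entirely in how those sequences vary from signer to signer.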
Daniel:
No, yeah, 100%. And I just want to come in and do the same thing Xander did. When I was doing my cursory initial research on the web3 space, what I found most valuable, as a digital artist, as someone who's done a lot of research into the curation and collection of digital archival works, is that it's kind of weird, it's kind of tricky. Like, if you're a video artist and your video art has an installation component, how do you give them the file? How do they store the file? If you've got a projection-mapped piece that doesn't require one, maybe it's the facade of a building, how is that collected? How is that stored? How do you know who made it, et cetera? So I think that between the web3 push of the last couple of years and now, obviously, the AI push that's going on currently, we're starting to combine these base concepts of ownership and provenance and collectibility, and making sure that at the root of the creation, the artists are paid and credited. And I think one of the more exciting things for me, in terms of this new breadth of new technologies, is how we're rethinking what to do with them, how to store them, how to share them, and how to do it all in a way that helps as many and hurts as few as possible. So thank you all for the great answers. It's awesome to know that other people are also thinking about the same stuff, because it's very important. Ethics is super important, especially with applied AI. Tery, like you said at the very beginning, you made your own dataset and you've trained a model with that dataset, and that's your work. I think that's the future of applied visual AI: IP holders really taking control of their own IP, maybe providing access to artists and creators via some, you know, portioned-out, allocated amount per sale. But that openness is really what AI has done, because again, the best AI is open source, and the ones that aren't are the ones where you get problems, you get sued, you get issues. And if we want a proper open-source ecosystem in the AI space, we have to get open-source funding. We have to get ethically trained data. We have to use properly paid image segmenters and the people who do the micro-adjustments and add detail tags to images so that we can then use them to train our models. Because people need to be paid more than $2 a day. So it's starting these conversations, and then finishing them with action, that I think is something we really need to do more of. And I'm glad I'm in a room surrounded by people who do that. It's awesome. So we're going to go straight to Kenn with this one. How is AI being used for employment and creating inclusive opportunities when it comes to people on the spectrum, ADHD, other mental health issues, neurodiversity in general? How has it benefited? How has it maybe negatively impacted? Like, what's your opinion on how AI, employment, and those with disabilities can mesh into something that is better for the future?
Kenn:
Okay, well, that's a great question, and it's a mountain of other questions with many trees of details, so I'll make my path through it as well as I can. In the last few months, since the summer, I was accepted into the Startup Wise Guys pre-accelerator program. Initially this was for the Ruby Room, which was my environment for presenting culture and ideas, one that Xander took part in. And from that, I came up with the idea of a league, where we could maybe start to rewrite the rules on autistic representation and employment. And that led to considering, well, within an investment situation, what would be useful? We could start to talk about AI in terms of matching autistic skills and talents and focus and accommodations to work opportunities. It's a little bit amorphous right now because we need to talk to the autistic community. The old saying, of course, is if you've met one autistic person, you've met one autistic person, because we're very individualized in terms of our likes and preferences and skills and needs. So this is an interesting problem, because employment, to my mind, is basically set up on an industrial model. And in true autistic form, if I'm visualizing a word and then trying to say it, the two clash and it doesn't quite work out, so I'll try and stay above the surface of that ocean. Employment has its own built-in biases in terms of how, these days, resumes are input into the system, how interviews proceed, occasionally how videos are analyzed in terms of whether an applicant is truthful, whether they're confident. These are all issues that come up when you're an autistic person. If we admit that we're autistic, we may be passed over for a position, particularly one in a position of authority. If we show autistic traits, like you see me moving around very often or looking off to the side, those may not come across well to an uninformed employer. And once you're within a role itself, there may be a limited time span for it, because sooner or later the interactions with the neurotypical community or the company culture will begin to put stress on the need for accommodations for an autistic individual. So what we hope to do is rewrite how we're perceived, because is it really a question of disability on the autistic spectrum, or is it a question of conflict with the status quo? There was an experiment done where groups were created, one a neurotypical group, one an autistic group, to measure how well they could solve problems. And it turned out they were both about equally competent at solving problems, but when you created a mixed group, that dropped. So it's a question of communication and accommodation. So what I'm hoping, through AI and through the immersive world, and especially through Midjourney, for example, which creates so much visual material, is to try and widen the scope of how we're accepted. During the creation of my pitch for this accelerator, I found that the statistics for employment were really drastic. More than half of us have a secondary education, but only a fraction of us, something like 2.4% it seemed in the EU, or 15% in most of North America, a little bit higher in the United States, have employment or steady employment. And when I say roles have a limited lifespan for autistic individuals, and I know I'm going on, I'll wrap it to a conclusion.
Daniel:
No, no, please, this is important. This is what technology is for, in my opinion. Go until you're done.
Kenn:
Thank you, I appreciate it very much. The time spans in between employment, or steady employment, can be quite drastic. For me, it's been about three years between employment opportunities, and so I've had to make my own, which is why I became an entrepreneur, to have more control over accommodations and what I could accomplish. For other individuals, I've heard seven years, and these are married individuals with house payments; they get by on little bits of work until they hopefully, eventually, find a role that really suits them well. So I think with the evolution of AI and the metaverse, there's a lot of opportunity for building. And I think that autistic individuals, and individuals with ADHD, dyslexia, and so on, we have different talents in terms of, say, pattern recognition or an alternate point of view, which innovation depends upon. And in terms of focus, statistically we're also more productive than neurotypical groups, which may be part of the problem: we do so well that, you know, it creates conflict. But we also have a bias towards integrity. We have a difficult enough time puzzling out very subtle divergences in communication, sometimes intentionally misdirecting or not, that we often ask, can you clarify that, or can you be more specific? That's not to say that no one on the spectrum can be a liar or difficult or have their own problems, but by and large, we're focused on just trying to function within a neurotypical environment. So put these traits together: an alternate viewpoint; deep interest in a specific area that we might be focused on at the time, special interests; the ability to really focus on what we're doing, increasing our productivity; and the ability to really be concerned with communication. That seems to make us a perfect fit for evolving this new spatial web, this new spatial universe, for innovating solutions, for creating things that can really pay back multitudes of value for employers. But we have to get around the bias issues. We have to understand, or the neurotypical environment has to understand, how we can navigate or sidestep these biases and make room for our particular characteristics, because we've been around for as long as people have. One in every hundred babies born is autistic. So this is an adaptation, I think. It's not a disability in the sense that not being able to see or not being able to hear may be. Exactly. So let's make room for all of us, because in creating the solutions that are increasingly planet-sized, climate change, AI, and the rest, let's make room for the genius that exists everywhere. And that's what I hope we can do through these opportunities in AI. Thank you.
Daniel:
Heck yeah, that was an amazing answer. Oh my gosh. Yeah, so I'm, if you can't tell, very, very ADHD. So I appreciate that a lot. It has taken me many years to kind of figure out my place, my best mode of being. And I definitely think I'm there now, but you know, I'm 30 now, so it's taken me some time. I can only imagine, you know, the work and pressure that you put on yourself, but... it's amazing to have someone like you in the community, in the industry, making waves, making changes, and thinking about this. Because again, your generic neuro-typical person isn't gonna necessarily even know that they have to be aware of stuff like this. But this is what technology allows for, right? I think some of the more impactful VR, AR, mixed reality experiences I've ever had have been those that give me someone else's perspective.
Kenn:
I completely agree. I tend to think of VR as theater, as another step in the evolution of theater. We originally would paint on cave walls and hold ceremonial processes there to handle the world and have a deeper understanding of oneself. Then we had Greek theater and amphitheaters, the use of masks, a narrative; then Méliès, with those imaginative studios at the turn of the century; and then of course cinema. And now we have VR, and it's a way of stepping onto the stage with a performer, being in their skin, as it were.
Daniel:
Yeah, the empathy that can be built with technology is so powerful, and it's not done nearly enough. And it's often done for the opposite purpose, which is somewhat frustrating. So we have one end-group question, but we have one before that. Let's go back to Xander. As a creator now, how does AI help you enhance the projects you work on, from the digital Xander Love collection to other NFT projects to, you know, even your work with children and general education? And kind of communicating, whether that's a brand journey you're trying to communicate with AI, or with the help of AI, or, you know, your personal social media presence, which is badass and which you have amplified so well that you're an icon within the community.
Xander:
Man, thank you. That's so humbling. I appreciate it. I'm just like, that's incredible. To answer your question, first of all, my brain is just going from all the conversations here. These are some of my favorite influencers in the world, so hearing them, hearing this talk, I'm trying to stay focused. But yeah, one really interesting revelation I had a few years ago, before I even started experimenting with generative AI and this current trend: I was super into science fiction, like I said, and I can't remember what the film was, or maybe it was something I read, I don't remember where I got this concept, to be honest. But it was that when we think of artificial intelligence, we usually think of it increasing our capacity for work and maybe general intelligence. However, one thing that is often overlooked or forgotten is its capacity to increase our emotional intelligence, with increased empathy. So, you know, even as Kenn was speaking about how, when you mix the neurotypical and neurodivergent groups, the communication kind of falls off, I was thinking of AI kind of being a mediator of that. And one of the biggest things I've learned from using AI for the last year is how I have to be very deliberate in my words when speaking to it. A lot of times with humans, with each other, we're very vague; we don't really fully know what the other person is saying, we just have to hope we're right, you know? But with artificial intelligence, it's a really good exercise, in a lot of the creative arts, in being very deliberate and specific about what you're describing, the images that are in your head, and how to translate those to other people. So, to get back to your question, that's where it's been the most freeing for me. Even before my journey with AI, the goal I've had, or, as I tell the kids in my classroom, the superpower I've been trying to develop, is kind of like this quantum ability to create at the speed of thought, for myself or for others. And I think generative AI has really enhanced that. And that really raises empathy, because it's not just about me: it's being able to see other people's worlds that have been, you know, trapped inside of their minds, that they can experience and see and feel but no one else can. Instantly now we can share those as visual thoughts, and the more we see them, the more it increases our palette to appreciate and understand and maybe even just visualize, whoa, that's what's going on in other people's heads. That's really cool. And, you know, as an artist, it also helps me push the boundaries of my own imagination, asking it to envision things for me and then curating those realities and sharing them. It has allowed me to reach beyond some limitations that I had; concepts that have been in my head for years that I've been trying to develop and put out into the world, now I can clear that space, share them, get feedback, and just continue the loop of self-expression and self-actualization that comes with creating and sharing those experiences. So every day I'm just so grateful for the time that I'm alive, and just being able to, literally, like, dude, I can't believe this. It's incredible. It's so cool to be able to see this.
Daniel:
How can everyone prepare themselves for a future of AI-driven everything, right? How can they take advantage of this new set of tools to positively engage with their emerging careers, their interests? How can they use the parts of this they might like? Maybe they don't like everything. Maybe they only like ChatGPT. Maybe they only like the visual side. Maybe they only wanna make videos, and right now there aren't that many tools for that. Maybe they wanna learn how to get into it. How can they take those first steps?
Tery:
Wow, Daniel, you're a tough act to follow. So first and foremost, you know, anybody who's watching this, just hit me up. I'm happy to teach anybody. You can hit me up on Twitter, I'm just at Tery on Twitter. I've been doing a lot of education in this, like Xander and Kenn. I tend to educate the seniors, but over the summer I'll be educating the youngsters; we're gonna do an AI hackathon here in my community, which is very cool. But education, like, find those resources. AI Art Today on Twitter is a great space that happens every single day except the weekends, and we talk about all the tools and all the things. Like, we just talked copyright; I think we spent almost a month just talking about copyright, you know, all the tools that we're using, the ethics around it, the artists that are doing such great things, like ourselves. There aren't many books yet, except I have a book coming out in a couple of months on brand and AI, another book, that would be number 10, so I've done a ton of books. But experimentation too. Don't be afraid, you know? Go in and try things out. I know I hit really hard on the whole ethics thing, but don't be afraid to at least experiment and use those tools. Try all the tools to see what you like, from ChatGPT to Midjourney to even Playform. Gosh, hit me up, I'll get you some free tokens on Playform if you wanna try it out. But yeah, experiment. Like I think you mentioned, video: I've been doing a lot of experimentation with video because I want to take my science fiction novel, Laundry Gate, into a video that I can control and show the characters the way they should be. Also, you know, Beyond Darkness is another one. Like, sorry, Kenn, I did go ahead and use Midjourney and I did publish my book, so I do have a copyright on it. I did do it, but I want to take that book and make it into, like, a video. And so I'm learning, I'm learning things that, you know, I haven't used in decades, like back on Macromedia. Oh my God, did I just date myself? But that's cool, just get out there. Join Google's AI Test Kitchen. I'm on there, I've been using it. I got to train LaMDA, and now I'm in MusicLM. So it's very cool. There are so many resources, and of course, all of us. So those are some of the things.
Kenn:
You're tremendously inspiring to me, Tery, because you've produced these books, you participate in the community, you educate and you express it so well and so clearly. What a treasure, yeah.
Tery:
Thank you.
Daniel:
Okay, so we did Tery; now Kenn, same question. What are the best first steps you can recommend to incoming interested parties?
Kenn:
Okay, the first thing I'd say is exactly as Tery's mentioned: jump right in. And as you've mentioned, jump right in and don't be afraid to break things. Type in a prompt with Midjourney and find out what the result is. Get to know what the contours of ChatGPT's responses are like, where it seems to have great content and where it might be trying to make things up. And I would also keep the perspective that this is all taking place in the continuum of human development, that if we could think of our thoughts as manifest in ChatGPT or Midjourney, they're like echoes in the corridor of time. To maintain perspective on that, as an autistic person, I've found getting back into meditation helps a lot, because it's easy to jump into the ocean of these possibilities, but then to create with them as well. As Tery has shown, you can create publications, you can participate in community, and this multi-dimensional aspect of living within generative AI's potential, I think, can really be nourishing, but stay in touch with your center as a creative personality. So experiment, break things, and keep on trucking.
Daniel:
And we'll end it with Xander, and then we'll have one last little end-all wrap-up hangout.
Xander:
So one of the first and coolest things I do when I experiment with new technologies or new fields: even during the pandemic, I became a very, very big MMA fan, just because of all the intricacies of the different martial arts styles and how they apply them, and the personalities. I was just super interested and knew nothing about fighting. I mean, I didn't know the difference between a hook and an uppercut, really, or a jab, or any of these things. So basically what I did, kind of like you guys have mentioned, is I found people who do talk about it and I followed them. The things you're hyper interested in, like you said, you can find somebody else online who is just as into them, hopefully, and then just immerse yourself in those communities and worlds. And then also find influencers or minds that resonate with you. If they talk about stuff you like, and they do stuff you like, just follow them. Try to be as active as you can. Don't be afraid to leave a comment and ask a question. Like Daniel said, there was this Steve Jobs quote that I'll always remember, because it kind of pissed me off in the moment, but later I got a really insightful revelation from it. He said that anytime he wanted something, he would reach out to people, and the universe, the doors, would always open, you know; he would always have that access, would always gain it. And I was like, well, that's great for you, Mr. Steve Jobs, but I don't have that same network of opportunity. However, you know, X amount of years later, in this new paradigm, now we do. Now we can connect and, within two or three clicks, reach out. Sometimes it's positive, sometimes not, you never know until you try, but more often than not for me, when I have reached out to find those people or those things that I'm interested in and very passionate about, the doors have always been open. So I would just encourage you to stay curious, or, as Steve Jobs said, stay hungry, stay foolish, and just keep reaching out for the things you're interested in and chasing them. And yeah, use the people who have failed before you as inspiration, and know that growth usually happens there, in that exploration and interest. Just nurture your natural curiosity, wherever that may lead you.
Tery:
That's great.
Daniel:
What is the one thing in the AI space each of you is most excited about? We can start with Xander.
Xander:
Hey, I am most excited about, and you know, this may be self-serving, but the release of my NFT project and metaverse experience, Let Go, and the AI Journey X1s. I've been working on them for a year, and my goal was to start utilizing the tools in a real, recognizable way, because, like, I don't know for a hundred percent sure, but I started realizing Midjourney itself and some of these AI tools have a look and a feel. I don't know if artists can differentiate themselves with their particular style, but maybe they can implement it for a particular use case and be associated with that. So that's what I did with the AI Journey X1s, as a way to really give myself a good use case and implement it directly into my creative process. And at large, it's a part of Let Go, which is a quest-based metaverse mosaic where you travel through nine worlds collecting mythical objects to get this ultimate seed phrase that has a really deep meaning, a deep meaning about a lot of the things we've been talking about: expression, consciousness, and some of those more abstract concepts. The other part is that one of the AI Journey use cases is to get people more into our world, you know, an easy gateway. Get them into the tools, get them free merchandise, and let them come have positive group experiences in our industry. Sometimes there can be waves where the press negatively portrays us, and for good reason sometimes, because of some of the characters in here, but like you said, not all of us are in it, A, for the trendiness, or, B, just for the money grab. There are really genuine people who are interested in the technology and in connecting with others and helping others do the same. So, yeah, I encourage all of you to check that out, join the community, and at the very least play the experience. Keep a lookout for it. It's called Let Go, a metaverse mosaic from Xander Love.
Daniel:
All right, let's go with Kenn.
Kenn:
Okay, well, tough act to follow, because Xander has it all in place when it comes to the sound and the music and the theme and the color and the shapes and the rest of it. So what I'm going to show off is my own little metaverse: I'm creating a Ruby Room 2, which is almost completed. And I hope you forgive me for the nerdy name, but I'm calling it the Cathedral of Funk. It's going to be another place to hang out, and part of what I'm calling a rebel metaverse: a place where we can put up tags, a place where we can make the spaces our own instead of just going through the commercial side of things entirely. And the other thing I'm excited about is creating the League of Extraordinary Talent, for neurodiverse individuals to rebrand ourselves, to represent ourselves differently, as superheroes in our own milieu. So that's going to be a great creative endeavor as well. It's all based on a comic book look, so it's going to be impossible to ignore.
Daniel:
Alright, and Tery, how about you?
Tery:
Oh my God, that's a loaded question. I have so many projects going on, but I'm gonna stick with this: I wanna learn the music AI. So like I said, I'm working with MusicLM. I did a few little things and pushed them out there using MusicLM and Deforum; Deforum gives me the video. So I wanna explore that more so I can create my own content from it. But the thing that I'm hoping for, a little bit of blue sky for the future, and since we're all loving science fiction: I really do want my own AI assistant, but not, like, the assistant kind of thing. I want this companion that, you know, I could have these deep discussions with, that I could trust, not a gofer, but something that brings me back things that are, you know, deep, and opens my mind some more. So I'm hoping that's where the future takes us in terms of AI and creativity and our own imagination, which is so incredible. So thank you so much for having me on this. This is so much fun.
Daniel:
So much fun. And I'll end it on kind of the same note. My biggest excitement, and the thing I've been working on now for four and a half, five years, is the blossoming field of 3D generative AI, which is now doing some amazing… we're finally there, guys. It's been so long. I was doing this stuff early; you should see my independent study from 2018 at Pratt. It was literally making an AI generative plugin for 3D assets for Unity, which I did not do. And it's still not done, not by Nvidia, no one's done it. So, you know, I got in over my head, but we're so close with 3D. There's such an assortment of Blender plugins. If you're at all interested in the 3D space, in terms of asset rendering or making fun 3D-printable objects, I've got a fun example. Look at that. That's an AI-generated chair, made in Midjourney, processed through KDEM, and then printed on a 3D printer. So yes, it doesn't look really comfortable; if you sit on that, it won't be very nice, right? But the premise is we're at the point where I can get a file out of the AI that I can give to my printer with minimal human input, this one required a little bit, to get it printed. I can just get it. Like, if I sent this to a fabricator, they would just make it. There are no additional steps really, other than the workflow that you set up for 3D, and that's getting easier and easier. So that's what I'm most excited about. I think the idea of metaverse spaces populated entirely by unique individual input is the future of immersion, and then it's the future of personalized immersion, the future of personalized storytelling. Obviously Tery knows, obviously Kenn and Xander know: it's storytelling. Even though we're in tech, even though we're using AI, that's why we're here. That's why we do art, that's why we use technology, it's to tell our stories. It's just the best tool that we've found to tell that story. I sure wish I could paint, but I can't, so I guess I'm gonna use AI. And with that, we're done, guys. That was pretty awesome.
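For readers who want to try a similar image-to-print workflow, here is a minimal sketch of the cleanup step that usually needs that "little bit" of human input: making the generated mesh watertight, scaling it, and exporting an STL for the slicer. It assumes the open-source trimesh library and a placeholder file name; it is not the exact pipeline described above, and the image-to-3D conversion step itself is left out.

```python
# Hypothetical sketch only; not the exact pipeline described in the talk.
# Takes a generated mesh, repairs it, scales it to a target height, and
# exports an STL ready for a 3D-printer slicer.
import trimesh


def prepare_for_print(path_in: str, path_out: str, target_height_mm: float = 120.0):
    mesh = trimesh.load(path_in, force="mesh")  # force a single mesh, not a scene
    trimesh.repair.fix_normals(mesh)            # make face winding consistent
    trimesh.repair.fill_holes(mesh)             # close small gaps so it can print
    # Scale so the largest dimension matches the target print height.
    mesh.apply_scale(target_height_mm / mesh.bounding_box.extents.max())
    if not mesh.is_watertight:
        print("Warning: mesh still has holes; a slicer may complain.")
    mesh.export(path_out)


if __name__ == "__main__":
    # 'chair.obj' is a stand-in for whatever the image-to-3D step produced.
    prepare_for_print("chair.obj", "chair.stl")
```

The sketch automates only the mechanical checks (normals, holes, scale) and leaves any aesthetic edits to the person, which matches the "minimal human input" framing above.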
Tery:
That was so great.
Kenn:
Yeah!