tag:blogger.com,1999:blog-12189014.post4568237511980807009..comments2024-03-28T18:17:09.618-04:00Comments on ILLUSTRATION ART: DO AI ARTISTS DREAM OF ELECTRIC SHEEP?David Apatoffhttp://www.blogger.com/profile/11293486149879229016noreply@blogger.comBlogger163125tag:blogger.com,1999:blog-12189014.post-70717431345560987252023-01-11T21:36:22.584-05:002023-01-11T21:36:22.584-05:00It's going to be funny an AI writing a post wi...It's going to be funny to see an AI writing a post, with blood in its eyes, about AI art not being art, like: I am an AI who hates what humans produce with other "me"s.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-72950947914802106602023-01-11T21:31:43.858-05:002023-01-11T21:31:43.858-05:00I've never seen such a lucid comment on the su...I've never seen such a lucid comment on the subject. AI art by far is bad; it's just that a system where artists have the possibility to help in its development does not exist. There is a lack of more interesting management for this tool.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-54100279125848761742023-01-11T21:25:04.291-05:002023-01-11T21:25:04.291-05:00One of my favorite things about AI art is the chao...One of my favorite things about AI art is the chaos it makes possible. I'd say it's the closest I've seen to the Cosmic Horror described in the pages of H.P. Lovecraft, only with a hint of Zdzisław Beksiński. If the AI evolves too much and no longer has that possibility of creating bizarre things, I will be very sad.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-89532492065255932822022-10-22T18:57:10.343-04:002022-10-22T18:57:10.343-04:00It's been longstanding and acceptable among le...It's been longstanding and acceptable among legal and ethical systems for living artists to knock off other living artists. 
There are thousands of Frank Frazetta, Jack Kirby or Frank Miller copy-cats.<br /><br />Hundreds of human artists have created subpar copies of Master Kim's work since he passed away, myself included. People don't tend to view this negatively - I haven't been called a ghoul for it nor have I heard anyone else be referred to as such. It is hard for me to see why there would be an ethically significant difference between silicon copycats and those composed of synapses and axons.<br /><br />The guy who trained the Kim Jung Gi corpus is being bombarded with crazy accusations. If a person made the same pictures, no one would have noticed. If said human copycat claimed "Look, I can make new Kim Jung Gi pictures.", people would have replied "Lol, no". That would have been the end of it. This whole thing feels distinctly like our weekly exhausting social justice warrior campaign to me. Brigading twitter shit is the basest form of human interaction.<br /><br />What's much more interesting to me is how cartooning continues to evade the ability of AI even now, while classical painting, hyperrealism, CGIish work, and photography do not. Usually, the more abstract the cartooning, the greater the failure rate. To me, this illustrates that cartooning is more innately human than other visual arts, it is more intrinsically poetic - something I've long suspected but can now measure tangibly as AI manages to replace everything but the toonz.<br /><br />(The comment you included from some redditor is ... a comment from some redditor. Am I supposed to comment on some random redditor? I don't have a reddit account for a reason.)Richardnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-10465323339617948602022-10-21T00:40:49.981-04:002022-10-21T00:40:49.981-04:00Kev Ferrara-- My experience is, the smaller the mi...Kev Ferrara-- My experience is, the smaller the mind, the quicker they are to resort to that kind of intemperate reaction.David Apatoffhttps://www.blogger.com/profile/11293486149879229016noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-76756532448584392102022-10-09T12:18:32.691-04:002022-10-09T12:18:32.691-04:00Making art was just "playing" all this t...Making art was just "playing" all this time. That's a sentiment I'm seeing regularly among AI advocates now. Apparently there has been some hidden disdain or resentment for artists all this time, particularly ones who get paid. Chris Jameshttps://www.blogger.com/profile/11931414857801867456noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-23412844072625541312022-10-08T15:43:37.482-04:002022-10-08T15:43:37.482-04:00Not sure if you're still looking in on this th...Not sure if you're still looking in on this thread Richard, but since you'd advocated for both AI art and made a comment about the tragic death of the great Kim Jung Gi...<br /><br />An artist friend has flagged up a Reddit comment regarding, upon KJG's death, one of the AI 'art' engines being tasked to create "New Kim Jung Gi" artworks. The following reply was in response to the claim that such a usage was, among other things, ghoulish.<br /><br /><i>"I fail to see the issue here. You can't copyright a style, and now people can continue enjoying Kim's work. Hell, now everyone can basically get custom commissions from him for completely free. I'm sorry that everyone can now draw as well as Kim could, I guess. 
His legacy has been immortalized and you're trying to suck this beautiful homage into your butthurt AI whinefest because you can't get likes on twitter just for playing with pencils anymore. Get over yourself."</i><br /><br />I thought this was a strong example of how quickly a technology's inherent philosophy can inhabit the mind of one of its users. kev ferrarahttps://www.blogger.com/profile/09509572970616136990noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-83799366707128746412022-10-05T21:08:58.051-04:002022-10-05T21:08:58.051-04:00If there is a correlation for the statement that &...<b>If there is a correlation for the statement that "old black men who read superhero comic books are more likely to be dependable employees", its unlikely there is a "reason" for the correlation</b><br /><br />White culture views superheroes with a sort of distanced amusement, but there is an earnestness and directness to older black culture as it approaches superheroes.<br /><br />They don't view it through the distortions of post-modernity, which make Superman seem silly or even grotesque. Instead, the heroes can still be pinnacles of human behavior which are to be imitated.<br /><br />And so there is a common thread that old black dudes who imitate superheroes are moral to a fault, in a way that becomes quasi-religious.<br /><br />If you haven't experienced it, maybe it's geographical. I was trying to pick cultural correlations that would be fairly universal.Richardhttps://www.blogger.com/profile/08249577762409684046noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-62847686171681911742022-10-05T17:57:07.024-04:002022-10-05T17:57:07.024-04:00Correlations are somewhat perception-deep, which m...Correlations are somewhat perception-deep, which means that they can be interesting, provocative, useful and perhaps even fruitful, but just as often suggest what is doubtful. 
Nature (whatever that is) wanted our perceptions to seek correlations and connections, even the dubious ones, but forgot to build in a natural skepticism as strong as the perception. We believe quicker than we doubt. <br /><br />If there is a correlation for the statement that "old black men who read superhero comic books are more likely to be dependable employees", its unlikely there is a "reason" for the correlation. One could not create a research protocol in good conscience with that as a hypothesis meeting the requirements of sound research design, although you'd certainly be free to do so under the 1st amendment. <br /><br />I agree with you that the researchers are probably finding more than "dust bunnies". Anyway, I hope so. Wesnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-88608208461652615002022-10-05T17:11:42.213-04:002022-10-05T17:11:42.213-04:00Correlations are real. Look, I'm doing a PhD i...Correlations are real. Look, I'm doing a PhD in engineering and I use Machine Learning and statistics. Nothing fancy, just MLPs, Reinforcement Learning and some judicious application of Bayes Law. But it's enough to know that they are highly technical and full of subtleties, and that most pop-sci and intuitive interpretations are fundamentally flawed. Learned it the hard way.<br /><br />I also teach, and this confirmed for me that you do not ever judge an individual by his or her background. Never ever. Not only because it is an epistemological misuse of Statistics, or that it causes perverse positive feedbacks, or that it's academic misbehavior, but because of basic knowledge of human nature. 
I learned it somewhere in the early teens through the method of having friends.<br /><br />xopxehttps://www.blogger.com/profile/14304288015305097195noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-61493267248999543102022-10-05T15:35:32.500-04:002022-10-05T15:35:32.500-04:00David, how'sabout covering Kim Jung Gi now?David, how'sabout covering Kim Jung Gi now?Richardnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-7027414960411844822022-10-05T15:00:26.139-04:002022-10-05T15:00:26.139-04:00Gentlemen, I applaud how cutting-edge you are on t...Gentlemen, I applaud how cutting-edge you are on the science. I was under the quaint impression that there are endless multi-dimensional correlations of a person's hobbies, gender, obsessions, culture, sexual orientation, birthplace, upbringing, and personality with their job performance. If you tell me that the statisticians are digging for those relationships and finding only dust bunnies, gosh, I wouldn't want to be so old-fashioned.Richardnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-15297295118784712072022-10-05T14:09:28.737-04:002022-10-05T14:09:28.737-04:00"Wow. It's phrenology all over again.&quo..."Wow. It's phrenology all over again."<br /><br />Well said, 'nuff said!<br /><br />Wesnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-70714013733045842012022-10-05T13:06:12.399-04:002022-10-05T13:06:12.399-04:00Wow. It's phrenology all over again.Wow. It's phrenology all over again.xopxehttps://www.blogger.com/profile/14304288015305097195noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-79398287649480394382022-10-05T11:55:56.897-04:002022-10-05T11:55:56.897-04:00Addendum -- You might say "Well, let's de...Addendum -- You might say "Well, let's devise a system to still look for the other stereotypes, while removing the ethnic or gender element from the calculus, to avoid sexism or racism."<br /><br />That does not work. 
It is regularly in conjunction with a person's demographic profile that those stereotypes have predictive power. <br /><br />Young women who play First-Person Shooter videogames are, for whatever reason, disproportionately excellent programmers. You can not extrapolate that to mean that young men who play FPS videogames are disproportionately excellent programmers; in that case, the opposite is true.<br /><br />For whatever reason, old black men who read superhero comicbooks are more likely to be dependable employees. You can not extrapolate that to mean that, in any demographic, a person who reads superhero comicbooks is disproportionately trustworthy.<br /><br />Gay guys who went to Southern States Conference schools have significantly superior language skills. That does not imply that Southern States Conference school attendance predicts language skill.<br /><br />You need that demographic information to help develop a functioning stereotype to predict for quality.Richardhttps://www.blogger.com/profile/08249577762409684046noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-85614400629650559652022-10-05T11:19:46.227-04:002022-10-05T11:19:46.227-04:00> And even if I came from a group where most pe...> And even if I came from a group where most people suck at something I expect to be evaluated on my own merits.<br /><br />There is no such thing as assessing someone on their own merit from a CV. The function of that process is to develop a mental model of the candidate. A mental model requires stereotypes and biases to function as heuristics for quality. Even someone's work history, education, and hobbies fail to provide definitive insight into the inherent qualities of the candidate. If we only ever judged people by their own personal merits, then we would have to do away with resume reviews altogether.<br /><br />The same is true of traditional interviews. When you interview someone, you're attempting to form an opinion of "What kind of person they are". 
In other words, you're trying to fit them into a box based on how they interact with you. If we wanted to judge candidates solely on their merits, we wouldn't use interviews at all either.<br /><br />There is, statistically speaking, an ex-con high school dropout in the United States who would be a better coder than everyone at my small firm if given the chance. But no human interviewer will find them through a traditional hiring practice, because that's not how human knowledge works.<br /><br />In contrast, AI has a higher success rate of finding these needles in haystacks. A person is only able to identify a few types of programmer stereotypes - for example, Star Trek nerd with engineering degree is an okay coder, Mathy guy is a good but slow programmer, large stackoverflow presence guy is a mediocre programmer but good at QA and code review, Indian guy who went to MIT and spent six years in a Google Labs internship is a great programmer but he will job hop after 1 year, girl who went to Reid College and code academy is a bad coder, etc.<br /><br />An AI with enough data could identify correlations of quality that no human would ever detect. 
It may discover that Laotian women who grew up in Canada and spent their youth baking are disproportionately great programmers, or Estonian guys in black metal bands are disproportionately great programmers.<br /><br />When a recruitment AI makes determinations based on bias, it is operating as designed.Richardhttps://www.blogger.com/profile/08249577762409684046noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-13077949800186763222022-10-05T11:15:23.674-04:002022-10-05T11:15:23.674-04:00This comment has been removed by the author.Richardhttps://www.blogger.com/profile/08249577762409684046noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-64303509496484678152022-10-05T01:00:38.061-04:002022-10-05T01:00:38.061-04:00The Amazon recruiting AI was trained on yearly per...<i>The Amazon recruiting AI was trained on yearly performance review scores. There's another less popular hypothesis there that also fits the data.</i><br /><br />That women get worse performance reviews? Say it, it's pretty easy to check, just throw an avg on a spreadsheet. And then check the distribution by sex and age of the people doing the reviews, and then perhaps we will learn something (as Amazon did the convoluted way).<br /><br />And even if I came from a group where most people suck at something I expect to be evaluated on my own merits. It feels weird having to explain this when speaking of what bias means.<br /><br />It all ends in perverse control loops, where decisions are made that only reinforce the stuff that is broken. 
Women have trouble performing in a male dominated industry -> Let's hire less women -> women conclude this is a shitty profession to pursue -> self-fulfilling prophecy.<br /><br />At some point we will unknowingly train our AI with decisions made by another AI and nobody will be able to say what the fuck we are doing.<br />xopxehttps://www.blogger.com/profile/14304288015305097195noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-51314174294621401842022-10-05T00:02:12.086-04:002022-10-05T00:02:12.086-04:00> IRS pseudonymized all test and training tax r...> IRS pseudonymized all test and training tax returns. The numbers were real, the names and SSNs were fake.<br /><br />Left off that they were pre-selected/known fraudulent.Richardnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-50774324023259848992022-10-04T23:56:11.816-04:002022-10-04T23:56:11.816-04:00The system will be very good at finding the frauds...<b>The system will be very good at finding the frauds that you already catch, and very bad with the ones that are fooling you. Then, there will be several effects. First, people that are getting away currently will do it even easier. </b><br /><br />The government is barely staffed to accomplish the tasks that are currently assigned to it. The positions are brutal to fill because General Schedule (GS) pay scales don't pay competitively with the private market. IRS is starved for accountants to look for fraud, and the problem is getting worse every year. Small private accounting firms are now paying multiples of what you can expect with the same background in government. <br /><br />Weeding out the easy fraudulent returns with AI allows more time to look for harder cases of fraud with your limited workforce. In the IRS case, AI is not just a good answer; barring Herman Cain's 9-9-9 plan, it might be the only answer.<br /><br /><b>Then, think *whom* you are catching (or who you don't). 
To keep it simple, imagine you catching more from a population just because you are checking them more intensely (the unbalanced dataset problem). </b><br /><br />Even if your AI is only good at catching criminals from a particular demographic, it's still functioning to catch criminals. We shouldn't let those guys go just because it's unfair that their backgrounds make them easier to catch.<br /><br />By more effectively catching those bad guys, you free up resources to catch other bad guys. Or girls! Perhaps there's a hidden epidemic of female-led gang violence going on that we aren't aware of, because we're too busy catching all the dudes.<br /><br /><br /><br />The Amazon recruiting AI was trained on yearly performance review scores. There's another less popular hypothesis there that also fits the data.<br /><br /><b>I would be interested in the results of the test. Just how often would the high CI translate into a real world fraudulent return? (I presume not 100%.)</b><br /><br />The FPR depends on where you set the cutoff, but at high thresholds it's fairly low. So, if you want to be extra certain and only allow things that are 95% confident or above, then you'll miss out on a lot of fraud. Alternatively, if you're okay with a 75% chance or higher, then be prepared to sift through many more false positives. Also worth noting, the higher the threshold, the less interesting examples of fraud you will detect. Don't think I can be more specific than that.<br /><br /><b>were the tested tax returns a pre-selected previously-processed set where it was known that some were fraudulent?</b><br /><br />IRS pseudonymized all test and training tax returns. The numbers were real, the names and SSNs were fake.Richardnoreply@blogger.comtag:blogger.com,1999:blog-12189014.post-24354644014409849462022-10-04T19:39:55.684-04:002022-10-04T19:39:55.684-04:00Richard,
In your specific case it might be so tha...Richard,<br /><br />In your specific case it might be so that AI makes for less biased results because the numerical inputs of tax returns are quite cut and dried. Fraud, or at least egregious error, in such a system is a game-like problem. <br /><br />Also, the AI scientists providing and testing the prototype would have no political motivation, only a financial one. And government accountants tend to be pawns, not players anyway.<br /><br />Having conceded that, we can still query the inputs, question where they came from and how... what it means to choose a reviewer 'at random', or a million tax returns 'at random', which fraudulent returns are chosen and which not... and what is a sufficient sample size, and a sufficient scope of fraudulent returns to properly train the ML algo on. Etc.<br /><br />Another bias might be: because AI is an advanced averaging app of the input set, originality of fraud would more likely escape its notice than common fraud.<br /><br />I would be interested in the results of the test. Just how often would the high CI translate into a real world fraudulent return? (I presume not 100%.)<br /><br />Also, was this an actual real world test on real tax returns that had not already been processed? Or were the tested tax returns a pre-selected previously-processed set where it was known that some were fraudulent?<br />kev ferrarahttps://www.blogger.com/profile/09509572970616136990noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-19274316760116458852022-10-04T19:24:51.332-04:002022-10-04T19:24:51.332-04:00The bias is in the training dataset. You are not e...The bias is in the training dataset. You are not explaining what a fraud is, you are giving examples. Examples of frauds *you already discovered*. The system will be very good at finding the frauds that you already catch, and very bad with the ones that are fooling you.<br /><br />Then, there will be several effects. 
First, people that are getting away currently will do it even easier. Because institutions will say "look we have a high tech *ARTIFICIAL INTELLIGENCE*, and the statistics are going up! Everything is beautiful!".<br /><br />Then, think *whom* you are catching (or who you don't). To keep it simple, imagine you catching more from a population just because you are checking them more intensely (the unbalanced dataset problem). Might be on purpose, might be logistics, might be cultural inertia. You might not even realize it. Now, if there's even a slight correlation between that population and some of the attributes you are using in your input vector, the classifier WILL catch it. If the classifier is white box, like a Classification Tree, somebody might raise an eyebrow. "wait, why is it important if he has many brothers and sisters?". If it is black box like a Neural Network, good luck catching that. And as the system will keep screwing people just as they used to be screwed it will look natural. Most will be glad, as they would have their bias whitewashed: see, it is not that we are part of a racist, classist and misogynistic world, this decision was made by the AI which is SCIENCE.<br /><br />And the unbalanced dataset is like the simplest problem of training an ML system, and it is already very hard to handle. There are others much more subtle. 
<br /><br />Good examples of bias creeping in are Amazon learning that they do not like hiring women:<br />https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G<br /><br />and Microsoft training a chatbot from twitter, which immediately went full genocidal misogynist, because that's what twitter apparently is.<br />xopxehttps://www.blogger.com/profile/14304288015305097195noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-28633137135851482642022-10-04T18:09:43.812-04:002022-10-04T18:09:43.812-04:00I meant 'prompt' in a global way as all th...<b> I meant 'prompt' in a global way as all the ways any AI is controlled via any input. </b><br /><br />I didn't see this before posting, but I think the crux about introducing bias into a solution like this still stands.Richardhttps://www.blogger.com/profile/08249577762409684046noreply@blogger.comtag:blogger.com,1999:blog-12189014.post-58580277397266738042022-10-04T18:07:23.476-04:002022-10-04T18:07:23.476-04:00how to gloss bureaucratic documentation so as to m...<b>how to gloss bureaucratic documentation so as to make just-so errors with built-in plausible deniability.</b><br /><br />I agree that the challenge would be about introducing political biases with built-in plausible deniability. But in most applications of ML in governance, this would not be about "quietly controlling the prompts". In generating Art from natural language, the prompt comes into play because you're using a language vector to predict an image vector. But this is not universal in ML applications.<br /><br />I worked on a prototype with a group of AI companies under the IRS Pilot program, and our methodology looked roughly like this:<br /><br />1. We use OCR to scan a million tax returns, and grab the values from each field (e.g. Return ID, number of exemptions, wages and salaries, tax-exempt interest, IRA distributions).<br />2. 
Those values are put into a vector which looks like:<br />[123456, 2, 35000, 0, 0]<br />[234567, 14, 95000, 0, 3500]<br />...<br /><br />3. In a separate CUI-rated database we capture the PII:<br />Return ID 123456, SSN: 123-45-6789, Name: Bill Smith, etc.<br />Return ID 234567, SSN: 234-56-7890, Name: Makenzie Jackson, etc.<br />...<br /><br />4. We would then use an ML system to predict the chance of fraud based on how close or far away the vector is from known fraudulent return vectors. This CI (confidence interval) would be appended to the end of each vector:<br />[123456, 2, 35000, 0, 0, 0.3]<br />[234567, 1, 95000, 0, 500, 0.93]<br />...<br /><br />5. If the CI was higher than some pre-determined value for a return, this would send the data to a random person for review. <br />6. If it is corroborated by the first reviewer, then a second human reviewer will be chosen at random.<br />7. If the AI was corroborated again by a second reviewer, the system would create a case in a Case Management system, link the PII stored in the other database to the Return ID, and an agent would begin working on it.<br /><br />The blinding in steps 3, 5, and 6 wasn't just our idea; it was required by the evaluation. If you can tell me where one would sneak in bias before step 7 (when the return is un-blinded to the second reviewer), please let me know because I will include it in our design.<br /><br />This is how many AI problems in government look.<br /><br />In the case of law enforcement, if they've decided to arrest Oath Keepers or January 6 protesters, AI isn't going to matter. However, in most government cases – such as with the IRS example – bias would be reduced with AI implementation.<br /><br />Richardhttps://www.blogger.com/profile/08249577762409684046noreply@blogger.com
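[Editor's note] The pipeline described above (score each return's feature vector by closeness to known-fraud vectors, threshold the confidence value, then route hits through two blinded random reviewers before a case is opened) can be sketched in a few lines of Python. This is a hypothetical illustration, not the IRS prototype: the feature values, the threshold, and the scoring rule are all invented, and the real system used a trained ML model rather than this toy nearest-vector score.

```python
import random

# Hypothetical known-fraud feature vectors (exemptions, wages,
# exempt interest, IRA distributions) -- invented numbers, not IRS data.
KNOWN_FRAUD = [[14, 95000, 0, 3500], [12, 80000, 0, 4000]]

def fraud_score(features):
    """Step 4 stand-in: score a return by closeness to known fraud.

    Inverts the Euclidean distance to the nearest known-fraud vector,
    so the score lands in (0, 1] and higher means closer to known fraud.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + min(dist(features, f) for f in KNOWN_FRAUD))

def route_return(return_id, features, threshold, reviewers, rng=random):
    """Steps 5-7: threshold the score, then two blinded random reviews.

    Reviewers see only the feature vector, never the PII, which stays
    keyed by return_id in a separate store and is joined only after
    the second corroboration.
    """
    score = fraud_score(features)
    if score < threshold:
        return {"return_id": return_id, "score": score, "case_opened": False}
    first, second = rng.sample(reviewers, 2)  # two distinct random reviewers
    case_opened = first(features) and second(features)
    return {"return_id": return_id, "score": score, "case_opened": case_opened}
```

A vector identical to a known-fraud vector scores 1.0 and opens a case only if both randomly chosen reviewers corroborate it, mirroring the double-corroboration gate in steps 5 through 7; anything below the threshold never reaches a human at all.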