All content posted on the Steem blockchain will appear on Google and attract more people to the platform, no matter what kind of content it is. For that reason I don't see bots as a problem.
Google quite aggressively deprioritises sites full of what it deems useless content.
Nice to see some more AI expertise here on steem.
My 2 cents: you don't need an AI to identify bot actions. If a bot provides quality content (and they will at some point), we will appreciate it.
So we would only need to identify bad quality. As you correctly noted, this is subjective, so my idea is to integrate into Steemit more AI techniques that allow users to find posts that match their own understanding of quality.
A week ago I posted about an ML recommender system (https://steemit.com/utopian-io/@drmake/underrated-yet-another-list-on-steemit) that would help users find quality content matching their own standard more easily. We have all the data for this on the blockchain: just use your votes as your personal quality measure, and then use a recommender system to find new posts that match your past quality and topic preferences (a rough sketch of the idea follows below).
Who cares about lousy content when the only content you get exposed to is content you consider good?
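To make the recommender idea above a little more concrete, here is a minimal sketch in Python. It assumes the voting history has already been pulled off the chain into a user-by-post matrix; the names (`votes`, `recommend_for`) and the toy data are hypothetical, and a real system would use far more signal than raw upvotes.

```python
# A minimal sketch of the vote-based recommender idea: use your own upvotes
# as a personal quality measure and score unseen posts by how similar other
# voters' tastes are to yours. Toy data only; names are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vote vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend_for(user_idx: int, votes: np.ndarray, top_n: int = 5) -> list[int]:
    """Return indices of posts the user hasn't voted on, ranked by
    similarity-weighted votes of other users."""
    me = votes[user_idx]
    sims = np.array([cosine_similarity(me, other) for other in votes])
    sims[user_idx] = 0.0            # ignore self-similarity
    scores = sims @ votes           # weight each post by similar users' votes
    scores[me > 0] = -np.inf        # don't re-recommend what I already voted on
    return list(np.argsort(scores)[::-1][:top_n])

# Toy example: 4 users x 6 posts, 1 = upvoted.
votes = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 1, 0, 1],
])
print(recommend_for(0, votes))  # posts that similar voters liked but user 0 hasn't seen
```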
I also agree, and I have been talking with @eturnerx about repurposing FOSSBot for a task like this. In their own words it's a "coarse parameter search" which could (and perhaps should) then be refined by humans.
I love your idea of an "Underrated" tab on Steemit. We've been talking about this kind of thing too (elsewhere I suggested a custom filter tab), and it should be done off the blockchain, as middle-layer processing between the chain and the UI.
In a few weeks I will post some AI notebooks that use Steem data for such a similarity analysis. I'm just on vacation for a while, so it has to wait a little.
Looking forward to it, followed 😊
Definitely need more people into AI! Although there does seem to be a number of people in the technical fields, the #AI and #artificialintelligence tags are pretty bare. Even the #science tag is kind of lacking.
I agree with you. Except for certain reward pool abuses, not giving visibility to bad quality content is probably sufficient to make the poster/bot stop. I'm sure that some of the status-report type bot posts are very interesting to the bot owners, so meh, let that stay as long as it is out of sight, out of mind.
Thanks for the link: I like your post. You are correct that we already have sufficient information on the blockchain to build a personalisation AI to filter content. Steem, very luckily, is also extensible enough so that we could make a richer set of user feedback tools if we found that useful. It would take building a new UI to make this happen though. A smaller first goal might be to build tools to help curators.
(Mention: @personz you might be interested in this thread)
Thank you for alerting me to the folly of reposting articles. I stopped doing that; thanks for the heads up.
Reward pool abuse will be difficult to hide once it reaches a large scale, and to work it needs a lot of Steem Power. The accounts doing this will then be flagged quite fast.
Glad to see you're posting regularly too. I've never been in 100% agreement with you, but I think you add an important voice to the conversation.
Thanks for staying on it my friend. You are making a difference here. I'm living proof of that. Thank you!
Many thanks. Your work motivates me to think too. It's a big world out there.
I think you have some great ideas here. I too despise all these bots, as I personally cannot point to a single bot that has benefited me. They just get in my way. I think your idea of mediating new account applications with a "Human Verification" type plugin has some merit and needs to be looked into. Anyway, you need to check out @JerryBanfield's recent post on the Steem Budget Proposals whitepaper, because you are a good candidate for submitting a "budget proposal" (should this concept be adopted).
Human verification barely works at the level of the tools (Steemit) and cannot keep bots off the blockchain. I think bot operators are going to strangle the goose if they don't ensure their bots are adding value to the community. There isn't really any way around it except to economically starve the worst bot operators and appeal to the humanity (and long-term ROI) of the other bot runners.
As for usefulness: I like to think my two bots add value to the community. They both give me more time to be human here, one at the cost of some quality-efficiency, I'll grant. But they quietly go about their business without bothering anybody with comment spam.
Thanks for the links to the Budget Proposals. I'll read it when I have some time to digest it properly.
Didn't I read recently that a neural net surpassed the accuracy of humans when reading numbers? Pretty soon we'll have to guess who isn't the bot based on who performs statistically similarly to a human, rather than better at some task.
There's some awesome work in this area. MNIST, the handwritten digits set, is one of those standard datasets used in machine learning. But recognising a symbol is orders of magnitude simpler than recognising very abstract things like quality.
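For comparison, here is how little code a digit recogniser takes these days. This is only an illustration of the point that symbol recognition is a measurable, well-defined task; it uses scikit-learn's small bundled digits dataset (an 8x8, MNIST-like set) rather than full MNIST, purely to keep it self-contained.

```python
# Digit recognition has unambiguous labels, so accuracy is easy to state.
# There is no equivalent ground truth for "quality".
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=2000)  # a simple baseline classifier
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
```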
Yeah, I was only speaking in regards to tests that one might put on posting or registration.
What two Bots do you run?
I programmed my own clerk bot, and it runs on a Raspberry Pi clone sitting on my desk. The other bot is a FOSSBot running on Heroku: I occasionally look at the votes and tweak the settings.
Ha, I have no idea what those do!
A clerk is like a book-keeper or accountant. I used steemjs to make a program that periodically checks my accounts and performs various "move the money about" actions.
FOSSBot is an automated upvote bot. You give it the criteria, it goes looking for matching posts, and then it upvotes them. You can run FOSSBot on a free account on the cloud service provider Heroku.
Thanks for the education!
I think writers should follow your advice because you make some solid judgements.
You must be an incredibly smart guy! Building your own bots? Whoa.
No smarter than any other learned trade.
The problem comes down to two things.
There are those who follow the rules, and then there are those who DESPERATELY want to claim notoriety and become famous BY ANY MEANS NECESSARY.
That is where "bad decisions" run rampant and cloud one's judgement, as people feel they deserve MUCH MORE for their efforts, when it's supposed to be about earning one's weight in gold; like anything in life, it takes hard work and putting the time in to accomplish great things
(and that means NOT RELYING ON THE "BOTS" TO DO ANY HEAVY LIFTING, but rather using physical "keystroke" efforts to dish out important, valuable information; I'm a FIRM BELIEVER in that).
So it pisses me off to see a great platform like Steemit already falling prey to these goofy AI bots; "true" Steemians take time to provide rich content, NOT FAKE AI GARBAGE.
I like how you've bolded the word rules. There aren't really many rules about content and growth except as a social negotiation. Some people feel some frustration that work judged as quality in other communities does not automatically find traction here. My reasoning is that the poster has not found/created their community on Steemit and is instead expecting the software to do that for them where it cannot.
For my own part, I have stopped most upvoting of template/bot posts. I'd prefer we had some tag that bots posted under so we could choose to filter them out much more easily. If we developed such a consensus then we could enforce it by downvoting bot posts that don't use the tag.
I hear ya, LOL. Just wait till I put together this one article I have in mind; it will be about Social Narrative & Entitlement :D
Please send me a link to your article. It sounds very interesting.
I just found out about the "don't log out on your ANDROID PHONE or you'll get kicked out of your account permanently" glitch. Yeah, I have to contact Steemit support on Discord so they can find a way to log me back into it :(
Good luck!
OK, I did post a couple of content items and they showed up on the site's HTML page (my page, to be exact), but for some odd reason I can't post my ebook Simple Ideologies of Health (40 Food & Drink Fact Depiction).
I'm baffled there... why won't all of it show up in a post?
Starting to think there could be a limit to how much content a post can handle.
Nevertheless, I'm starting on that piece "The Social Narrative & Entitlement" (had to put the "The" in front to give the article name a slightly better ring LOL).
stay tuned! :D
I might be starting a new account; I'm trying to get into my account but without any success. And I want to give you that story I had in mind about Social Narrative & Entitlement.
It's a "badass analysis" about the destruction socialism brings to our doorstep every day, and how multitudes of narratives continually demand we bend to their will, taking over our rights to choose how we feel and how we perceive the situations we witness every waking moment of our lives.
But first is the sign-up (back to the drawing board) for a new account, because of that stupid "-" in my username, jaye-irons. I can still comment, however (ironically using my Android phone, the very same one that caused this mess in the first place LOL). :D
I'm half done with the article and will have the rest done shortly. LOL, no, I didn't forget what I promised; I try my best to deliver. BTW, hope you're having a great day!! :D
I'm just trying to figure out how to promote it for more views; seeing my links kick back error messages is getting tiresome,
but I'll keep trying to find a way to get more views. I've been thinking I might check out D-Tube as an option for that.
I might be able to get that article through. I just did one a short while ago and it ACTUALLY WENT THROUGH, so now I'm about to start on the one we talked about: "The Social Narrative & Entitlement".
I'm going to start designing a cover for the top section of the post. Have a great day! I'll be finishing that up soon and will drop the link off in a comment. :D
Some serve a purpose. People follow and upvote what they like, which then gets a wider audience. If a bot is useful, it will get upvotes. Otherwise people will grow tired of it.
I haven't benefited from one yet! [useful] lol
The comparison that you made is obviously correct, @eturnerx.
I think what you are proposing is rational. Curation as a human endeavor is a time-consuming task with little payback. And while I don't like the overuse of bots, at least they are free of judgement. Why is this a good thing? Because I feel human judgement is biased. We tend to value entertainment at any level higher than intellectual thought. This has to do with our learned cultural emphasis on, and reliance on, social media. Our ideas about quality have become debased because our attention spans can no longer linger on a five-minute article unless there is an accompanying video with graphics. Good writing is taken for granted. So if there is no best judge of what quality is, we might as well delegate proof of input to a bot.
Bots are free of judgement, but quality is 100% about judgement. I think bots can help support humans in making those judgement calls, but they cannot replace them. I guess we have to find/build the community that shares our idea of what quality is and then work within that. Keep on having the conversations.
If you want long-form content then it's easy enough to build a bot that filters out anything too short (a minimal sketch of this follows below), and then you can exercise judgement over what the bot presents to you. Whatever your ideas on quality, we can give broad parameters to bots but still have to make human judgements.
Proof of input is the death of steem because bots can input faster than humans.
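For what it's worth, the length-filter bot mentioned above really is only a few lines. The sketch below uses stand-in post data; a real bot would pull posts from whatever Steem API client you use, and the word-count threshold is just an assumed, tunable parameter.

```python
# A minimal sketch of the "filter out anything too short" idea.
# Posts are represented as dicts with 'author', 'permlink', 'body';
# in practice these would come from a Steem API client.
MIN_WORDS = 400  # broad, tunable parameter; the human judgement comes after

def is_long_form(post: dict, min_words: int = MIN_WORDS) -> bool:
    """Keep only posts whose body has at least `min_words` words."""
    return len(post["body"].split()) >= min_words

def shortlist(posts: list[dict], min_words: int = MIN_WORDS) -> list[dict]:
    """Return the long-form candidates for a human to judge."""
    return [p for p in posts if is_long_form(p, min_words)]

# Toy usage with stand-in data:
posts = [
    {"author": "alice", "permlink": "long-essay", "body": "word " * 900},
    {"author": "bob", "permlink": "two-liner", "body": "short post"},
]
for p in shortlist(posts):
    print(f"@{p['author']}/{p['permlink']} passes the length filter")
```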
Often, though, a skilled writer can convey more in fewer words than an unskilled writer. And I've seen long posts on Steemit that offer nothing valuable, so I hope length doesn't become a determining factor. And this comment probably only serves to prove your point that, ultimately, human decision-making is needed.
There's also the "frequency" of how writers use words. Bots can only estimate what should be said to an extent, but it takes an intellect with an actual human brain to precisely lay out what needs to be said rather than what's irrelevant.
I wrote a long piece about writing/structuring concepts. It was fun, and it was brutal putting it together, but I believe it gives the writer value and concept ideas they may be able to apply to their writing style.
It basically comes down to the reader, what he or she decides to do with the acquired info.
Either way.. (it's powerful s--t!!) LOL.
https://steemit.com/steem/@jaye-irons/steemit-project-3rd-4th-and-5th-degree-fact-level-search-and-apply-concepts-jaye-s-way
That was the article I created, which talked about the various degree levels to use when searching for info one wants to apply to an eBook creation, a story-telling analysis, or a simple how-to write-up.
And guess what, it NEVER garnered a dime. But the information there is powerful, and very intriguing..
False negatives are always going to be a problem when we use metrics that indicate but do not directly measure something. But, in the case of AI Assistants that control visibility for a human, a false negative will not be seen while a false positive will be. It's a tricky balance to get right - if the parameters are too coarse then there'll be too many false positives: too tight and false negatives increase.
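A toy illustration of that coarse-versus-tight trade-off: the scores and "actually good" verdicts below are made up, but moving the visibility threshold shows how false positives and false negatives pull against each other.

```python
# Each item has a hypothetical bot score and a human verdict of whether it
# was actually worth seeing; raising the visibility threshold trades false
# positives for false negatives.
items = [  # (bot_score, actually_good)
    (0.9, True), (0.8, True), (0.7, False), (0.6, True),
    (0.5, False), (0.4, False), (0.3, True), (0.2, False),
]

def confusion(threshold: float):
    fp = sum(1 for s, good in items if s >= threshold and not good)
    fn = sum(1 for s, good in items if s < threshold and good)
    return fp, fn

for t in (0.25, 0.55, 0.85):  # coarse -> tight
    fp, fn = confusion(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```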
I think that conversation on what constitutes quality needs to happen. It's one of the many conversations that are being left out of school, like the ones about critical thinking and writing, and how to divorce capitalism from our thought processes.
Exactly! We, the community members, need to keep talking about quality and what that means for us. The rewards system might be AnCap in its operation, but it is not the only measure of value and influence on Steem. Money/STEEM was never meant to be a measure of all value, so our conversations really do matter. They inform creators of how their creations will be valued.
Though, I would say that any discussion on quality is not separable from a discussion on culture and that culture's embedded values.
I tend to swing towards live-and-let-live pluralism. And this means I support different communities having their own mutually incompatible versions of quality provided they don't impose their hegemonies on anybody else. Cyberspace has infinite room for them all.
What is human intelligence and what is human cleverness?
Don't you think that there is a vast difference?
A human being is an intelligent human being, not a clever person. HINT
Both of those terms have different definitions depending on the context of use. Would you care to elaborate on what you mean?
The word Intelligence means "reading between the lines".
You read my reply with human cleverness.
Human cleverness is a product of thought - memory - knowledge - experience - language.
You read my reply with human intelligence "reading between the lines" to grasp the meaning.
Human intelligence is not the product of thought - memory - knowledge - experience - language.
Human cleverness (thought - memory - knowledge - experience - language) is measurable.
Human intelligence is not measurable.
Therefore the cessation of human cleverness is the awakening of human intelligence.
Therefore there is no Artificial Intelligence, only Artificial Cleverness based on Human Cleverness.
PS: Read between the lines, and feel it, in your mind, in your blood, with your whole body!
Are you a philosophical Idealist who believes in the infinite creativity of the human mind? That humans have agency but an AI is only ever an agent of its programmer? Perhaps, like William Dembski, you think that AI isn't anything other than the inherited tricks of its programmers and trainers? Or is this a more Zen thing?
Maybe you've read Nagel's work on the experience of being a bat.
It's a fun topic.
I am an Israeli hacker educated in complex adaptive systems and cybernetics.
It's fun to read this reply from a clever person. HINT
You should try Ray Kurzweil: How to Create a Mind.
Off topic here, but CAST is one of my favourite topics. CAST is one reason why I think co-existing dumb AIs are a bigger risk than the singularity. CAST is also how I look at information flows around parts of a complex system: there are particular patterns that help you segment the system and thus understand it more easily. For example, thinking about a database diagram as a CAST meant it was easy to quickly derive rules for where to look for the conceptual objects the first time I looked at the diagram, and then point out where problems might occur.
It's an extremely helpful conceptual tool. Equilibriums, dampening effects, amplifying effects - all good stuff.
Dumb AIs are indeed a bigger risk; I call dumb AIs the beast system. :)
Once the human becomes the ghost in the machine, it's over for the human.
(((They))) salivate over this.
Ray is one of my heroes. I tend towards cautiously thinking he is probably correct. He is at least correct enough that we should try to create the minds as he says and see where that leads us. The journey will teach us a lot even though I don't think the destination will be where we think.
There are some big research hurdles to even contemplate what Ray is proposing, though. We're only just starting to work with AI that feeds back into itself (e.g. RNNs); previously our neural networks were sense-response machines that worked in one direction only. And we have engineering problems dealing with things at the scales needed to pull off Ray's brain simulation. But yeah, let's give it a shot.
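For readers unfamiliar with the feedback idea, here is a bare-bones vanilla RNN step in numpy. The weights are random and untrained; the only point is that the hidden state from the previous step is fed back in alongside the new input, unlike a one-direction feed-forward pass.

```python
# A minimal illustration of "feeds back into itself": a vanilla RNN cell
# where the previous hidden state is part of each step's input. Random,
# untrained weights; this shows the recurrence only, not learning.
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the feedback)
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One step: the new hidden state depends on the input AND the previous state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Run a short sequence; h carries information forward between steps.
h = np.zeros(hidden_size)
for t, x in enumerate(rng.normal(size=(5, input_size))):
    h = rnn_step(x, h)
    print(f"step {t}: hidden state norm = {np.linalg.norm(h):.3f}")
```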
I think this requires a thorough debate on what it is to be human; not just the limitations but also the extremes. What are the limits of human experience?
Just to take one example: if one assumes the brain is just a neural net, then one can argue that one can build a better artificial neural net. But the brain is more than point-to-point interactions at the synapses; there are electromagnetic fields that communicate between neurons. (What are EEGs otherwise?) This implies that the geometry of the brain is as much a part of its function as its electro-bio-chemistry. But that means that to build a better brain means building... a better brain!
This is just one example, but in essence, to assume limitations on human abilities because one has a preconceived model that builds in such limitations is not in itself proof that being able to surpass that model is the same as being "better" than human.
Great reply,
David Bohm wrote a book, Thought as a System.
I think we know too little to comfortably (i.e. based on evidence) put to rest the argument about how a mind functions. I do take an emergent perspective on intelligence, and that colours what I think can ground a mind. The brain can ground a mind, and a brain will have physical limitations. To what extent the physical limitations of the brain are also limitations on the mind is not known.
But, let's just say we accept the brain is some form of neural net - the reality is we're not even close to building a neural net that matches the scale of the brain. That's even before going into how we train that neural net. In a practical sense, the level of sophistication is not there yet.
True AI has not been developed!
I'm not certain that's true. Sure, we don't have a general AI that can act like a human, but there has been astounding development in AI in the last few years. Much of it is due to increased access to data, processing, and pre-built libraries. Many useful techniques were created back in the '80s, and it has only recently become apparent how useful they are.
General intelligence might be far off, but current AI is proving very useful and effective, even outperforming humans in some tasks, thanks to dedication and access to data. Humans don't sit around looking at millions of MRIs until they can differentiate the ones that show cancer with high accuracy. They work from far fewer examples.
Astounding progress, for sure. Though identifying things like cancer has a provable measurement to work from, and the AI is identifying the presence of real objects. Quality, a cultural construct, is not fixed, which makes the predictive value of an AI that learned from historic data not so useful.
It's that ability to learn from a few examples that leads the Idealists to think that humans do something quite special.
Here's an AI that is good at judging creativity with the benefit of a ton of hindsight.
It doesn't have to be perfect, though. The question is: given enough data on people's opinions of what quality is, could it differentiate quality enough to be useful? I'm not certain the answer isn't yes. Perhaps there are quite a few cases where it would fail. Could it bring some quality to the top, though? Let's say it was built as an upvote bot with 70% accuracy at predicting what a statistically significant portion of the audience thinks are quality articles (a rough sketch follows below). Would that not be helpful?
Not saying it would be worth the time to build, but it would probably be helpful. Also likely a fun project to play around with.
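As a rough sketch of what such a bot might look like: train a text classifier on posts labelled by whether the community judged them quality (a label derived from votes), then use its probability output to shortlist posts. The post bodies and labels below are stand-ins, and scikit-learn is just one convenient choice; this is illustrative, not a claim that 70% accuracy is achievable.

```python
# Learn from past community votes which posts were considered quality,
# then score new ones. `texts`/`labels` are stand-ins for post bodies and
# a vote-derived quality label pulled off the chain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

texts = [  # hypothetical post bodies
    "A detailed walkthrough of building a Steem curation tool with code samples.",
    "upvote me follow me thanks",
    "Long-form essay on how reward pools shape posting behaviour, with data.",
    "nice post bro",
] * 25  # repeated only so cross-validation has something to work with
labels = [1, 0, 1, 0] * 25  # 1 = community judged it quality, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))

# If held-out accuracy landed around the 70% mark mentioned above, the bot
# could shortlist posts for human curators rather than vote autonomously.
scores = cross_val_score(model, texts, labels, cv=5)
print("cross-validated accuracy:", scores.mean())

model.fit(texts, labels)
print("P(quality) for a new post:",
      model.predict_proba(["Step-by-step guide to running a witness node."])[0, 1])
```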
Such a bot would be very useful for a particular community to have. Provided that bot is not the only means of assigning rewards within the community, its false positives will not be so bad. Particularly if the bot kept on learning what the community liked.
Were I to produce a user interface for Steem, this would be my point of difference: AIs that learn what individuals would probably like to see. You can do that from very broad metrics and by observing user behaviour. But this is an AI at the level of the tools, and it acts as an assistant to users.
However, if the bot had way too much SP then it would become economically attractive for bad actors to learn how to fool it. So, yeah, I'd rather tune the bot to learn individual preferences to get around that.
Some commentators think that Searle's Strong AI, or AGI, is not actually attainable. I don't share their views, but I do think that AGI requires a level of sophistication that AI is not yet capable of. We are safe for now.
I think Strong AI is achievable, but it's not an easy task!