Nameless1995 t1_j6c0cri wrote
I don't think anything about Darwin or Nietzsche suggests "logic being about anything else". We can create any arbitrary ranking, but I don't see any privileged reason to put anything above anything else. Logic helps us maintain formal consistency and can be a valuable tool among many, but people don't go around treating logic as "somehow above everything" (whatever that even means). And sure, even if it is above everything, you can always redefine or broaden a concept to argue for anything. You can stipulate God to be that which is above everything, and then make God come to be by making something above everything according to some ranking criterion. But that doesn't really tell us anything interesting. That's just changing the intended references of words and their usages to reach conclusions that superficially appear to have some meaningful content beyond being linguistic cheats.
> AI has now reached the point where it can produce logic at better than human levels in some instances and will only continue to rapidly improve
Not really, though. It still struggles with logical questions (try asking ChatGPT some questions from LogiQA), let alone engaging in metalogic and such. Maybe someday it will, but not yet.
Moreover, logic is different from the capacity to do logic. "Logic is above everything" doesn't mean that a system capable of doing logic is above everything. So the argument is not only word games but also invalid.
Nameless1995 t1_j43ku48 wrote
Reply to [R] Is there any research on allowing Transformers to spent more compute on more difficult to predict tokens? by Chemont
Universal Transformer: https://arxiv.org/abs/1807.03819
PonderNet: https://arxiv.org/abs/2107.05407
Deep Equilibrium Models: https://arxiv.org/abs/1909.01377
http://www.gatsby.ucl.ac.uk/~balaji/udl2021/accepted-papers/UDL2021-paper-072.pdf
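These differ in their details, but a rough sketch of the shared idea (my own toy illustration in the spirit of adaptive computation, not any of these papers' exact methods; all names and hyperparameters are made up) is a shared layer applied repeatedly, with each token halting once a learned halting probability saturates, so harder-to-predict tokens get more compute:

```python
# Toy per-token adaptive-depth encoder: a shared layer is applied up to max_steps times,
# and each token stops contributing once its cumulative halting probability crosses a threshold.
import torch
import torch.nn as nn

class AdaptiveDepthEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=4, max_steps=8, threshold=0.99):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.halt = nn.Linear(d_model, 1)   # per-token halting probability
        self.max_steps = max_steps
        self.threshold = threshold

    def forward(self, x):                                      # x: (batch, seq, d_model)
        halted = torch.zeros(x.shape[:2], device=x.device)     # cumulative halt prob per token
        out = torch.zeros_like(x)
        for _ in range(self.max_steps):
            x = self.layer(x)
            p = torch.sigmoid(self.halt(x)).squeeze(-1)        # (batch, seq)
            still_running = (halted < self.threshold).float()  # tokens that want more compute
            weight = (p * still_running).unsqueeze(-1)
            out = out + weight * x                             # weighted "pondered" output
            halted = halted + p * still_running
        return out

enc = AdaptiveDepthEncoder()
y = enc(torch.randn(2, 10, 256))
print(y.shape)  # torch.Size([2, 10, 256])
```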
Nameless1995 t1_j3gw6tx wrote
Reply to comment by IAloneTheyEverywhere in For the émigré philosopher Imre Lakatos, science degenerates unless it is theoretically and experimentally progressive by ADefiniteDescription
(1) Your comment suggests (even if you didn't explicitly state it) that the author hasn't taken more than 1 undergrad physics course. However, the author has a doctorate in chemical physics and has written physics textbooks published by Oxford University Press. It's highly unlikely that he hasn't taken any classes in physics.
(2) If you didn't mean to suggest that the author is just a "wannabe philosopher of science" with no science education, then the sudden call for a ban (even if hyperbolic) on phil majors has no relevance to the OP.
(3) Moreover, your comment also suggests that phil. majors are somehow the problem in some unique sense (why not ask to ban anyone who hasn't taken a QM course from talking about QM?). But you provided no example whatsoever of phil. majors in general (discounting one or two possible exceptions) causing a ruckus by spreading misinformation about QM. So it's not clear if you are even thinking of phil. majors or just random people on the internet who engage in philosophy and QM (without being educated in either).
Your comment, thus, seems to be either making unwarranted suggestions (which could have been easily fact-checked, as /u/tiredstars suggested) or completely orthogonal to the OP article and its author.
Nameless1995 t1_j3dr4x3 wrote
Reply to comment by IAloneTheyEverywhere in For the émigré philosopher Imre Lakatos, science degenerates unless it is theoretically and experimentally progressive by ADefiniteDescription
The author is not a philosopher of science. His website claims that he is a pop-science writer with a PhD in chemical physics: http://www.jimbaggott.com/.
> so many wannabe philosophers of science
Example?
Nameless1995 t1_j2mc2iv wrote
Reply to Atheistic Naturalism does not offer any long-term pragmatic outcome of value when compared to Non-Naturalist views, such as Theism by _Zirath_
> You know, you're right, we're probably going to die
Yes, sure.
> Naturalism is a belief that entails infinite negative utility for the adherent.
How?
> death is an infinite loss
First, it's strange to assign infinite loss to non-existence.
Second, belief in Naturalism doesn't entail you will die; Naturalism (as believed in its current form) being true entails you will die. You have to differentiate between what is entailed by adopting a belief and what is entailed by the content of the belief being true. You can believe in Naturalism, but maybe the theists are right! Maybe the naturalist won't die but will happily suffer in hell for all eternity! Or perhaps there is a weird god who only allows Naturalists into heaven and makes all theists suffer! And so on and so forth. So even belief in Naturalism can only potentially lead to "infinite loss" or "infinite gain", which makes believing in Naturalism again on par with believing in Theism.
Nameless1995 t1_j1yqoxb wrote
Reply to comment by CryptoTrader1024 in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
You can just check SEP:
https://plato.stanford.edu/entries/compatibilism/
https://plato.stanford.edu/entries/compatibilism/supplement.html
They have different specific accounts of compatibilism -- for example, higher-order theories of freedom (from Frankfurt and others), reasons-responsiveness views, and there are also compatibilist variants of the "ability to do otherwise".
Also compatibilists are trying to make many different points:
- Some may argue that what we actually want to "track" by freedom, and what we care about, are compatible with determinism. This can involve thought experiments and arguments as to how the incompatibilist "ability to do otherwise" doesn't really offer much.
- They may argue that the "ability to do otherwise" itself is compatible with determinism if ability is understood in an unloaded/unbloated sense.
- They often want to argue not only that we have compatibilist free will, but also that it is moral-responsibility-inducing, which is a substantive point and not just "shrugging".
- They may attack incompatibilist intuitions. For example, they may provide cases where it feels intuitive to assign praise even when the person says they are compelled by their nature to do some good, or they may argue that demands for meta-wants or meta-meta-wills to control oneself and such are unnecessary, and that it's not clear why they would be necessary for moral responsibility. And so on.
- They may also provide x-phi support that ordinary humans have elements of compatibilist intuitions.
> This sort of solution essentially splits freedom into two concepts: the type of freedom we recognize in everyday life, and freedom from the laws of causality. Since the latter is impossible, it makes no sense to draw any kind of moral consequence from it, and one must therefore focus on the former. This is rather unsatisfying because it feels like the philosophical version of a shoulder shrug.
But that sounds more favorable to compatibilism than against it. If the compatibilist's version of freedom is the very freedom we recognize and talk about in everyday life, what's the practical value and meaning of this "freedom from laws of causality" (which you yourself recognize to be ultimately incoherent-seeming, because to be free from causation is to make actions free from the actor, which would again be no freedom at all)? So why should anyone bat an eye, or lament or celebrate the non-existence of, some concept that cannot even be legibly conceived of? It's also not clear that moral responsibility is necessarily threatened by the lack of such "freedom from causality". Backward-looking punishment can also be independently argued against, so we don't have to worry about that.
Personally, I am not a compatibilist. I am just trying to give credit where it's due.
Nameless1995 t1_j1yoxv3 wrote
Reply to comment by CryptoTrader1024 in An Argument in Favour of Unpredictable, Hard Determinism by CryptoTrader1024
> The compatibilists would argue that free will merely means freedom from compulsion.
They don't though.
Nameless1995 t1_j1xeo4l wrote
Reply to [D] Has any research been done to counteract the fact that each training datapoint "pulls the model in a different direction", partly undoing learning until shared features emerge? by derpderp3200
There is a literature on taking gradient agreement/conflict into account, though usually with motivations different from the exact one in the OP.
This is one place to start looking: https://arxiv.org/abs/2009.00329 (you can find some related work from the citations in google scholar/semantic scholar)
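For a concrete flavor of what "taking gradient conflict into account" can mean, here is a toy PCGrad-style "gradient surgery" sketch (my own illustration, not the linked paper's method): when two gradients conflict, project out the conflicting component before summing, so one datapoint doesn't directly undo what another just taught the model.

```python
import torch

def combine_gradients(g1: torch.Tensor, g2: torch.Tensor) -> torch.Tensor:
    """Combine two flattened gradient vectors, removing the conflicting component of g1."""
    if torch.dot(g1, g2) < 0:  # negative cosine similarity -> the gradients conflict
        g1 = g1 - torch.dot(g1, g2) / (g2.norm() ** 2 + 1e-12) * g2  # project g1 onto g2's normal plane
    return g1 + g2

# toy usage with two conflicting gradients
g_a = torch.tensor([1.0, 1.0])
g_b = torch.tensor([-1.0, 0.5])
print(combine_gradients(g_a, g_b))  # g_a's component conflicting with g_b has been projected out
```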
Nameless1995 t1_j0olnhi wrote
Reply to comment by CalligrapherFine6407 in [D] ChatGPT, crowdsourcing and similar examples by mvujas
One reason for the confident-sounding responses could be that internet data (on which it is trained) generally consists of confident-sounding answers. Many humans also confidently think they are right while being wrong. Besides, it doesn't have the ability, nor is it exactly trained, to model "truthfulness". So it may just maintain the confident-sounding style indiscriminately, whether it's speaking truth or fiction (although it can probably adopt a "less confident" attitude if explicitly asked to role-play as such, but then it may just be less confident indiscriminately).
That said, OpenAI may have found some ways to make it more cautious (not necessarily adopting less confident styles, but declining to respond when more "uncertain" -- probably based on perplexity or something; I don't know exactly how they enforce cautiousness):
See:
https://openai.com/blog/chatgpt/
> ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
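Just to make the perplexity guess above concrete, here's a rough sketch of one way such cautiousness could be enforced (purely speculative on my part, not OpenAI's actual mechanism; the function and threshold are made up): decline when the model's own average token log-probability for a draft answer is low, i.e. when its perplexity over its own answer is high.

```python
import math

def maybe_decline(draft_answer_token_logprobs, threshold_perplexity=20.0):
    """Return a hedge if the model seemed 'uncertain' about its own draft answer."""
    avg_nll = -sum(draft_answer_token_logprobs) / len(draft_answer_token_logprobs)
    perplexity = math.exp(avg_nll)
    if perplexity > threshold_perplexity:
        return "I'm not sure about that."  # decline / hedge instead of answering
    return None  # caller keeps the draft answer

# toy usage: a confident draft (high log-probs) vs. an uncertain one (low log-probs)
print(maybe_decline([-0.1, -0.2, -0.05]))  # None -> keep the answer
print(maybe_decline([-4.0, -3.5, -5.0]))   # "I'm not sure about that."
```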
Nameless1995 t1_j0lri08 wrote
Reply to comment by mvujas in [D] ChatGPT, crowdsourcing and similar examples by mvujas
I just had a thought. I think resampling with the "try again" button itself can be used as feedback (a noisy signal that the user didn't like the earlier version). Moreover, if a user switches back to the earlier sample, that can be another piece of feedback (the earlier version being preferred). They can get a lot of data from these. I expect users to use "try again" more frequently than upvotes/downvotes.
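A toy sketch of what I mean (my own illustration; the event format, weights, and helper are hypothetical): turn these UI events into noisy preference pairs of the kind used to train reward models in RLHF.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    preferred: str
    rejected: str
    weight: float  # lower weight for noisier implicit signals

def pairs_from_session(prompt, samples, events):
    """samples: generated responses in order; events: e.g. ['try_again', 'switch_back_to_0']."""
    pairs = []
    for i, event in enumerate(events):
        if event == "try_again" and i + 1 < len(samples):
            # "try again" weakly implies the new sample should beat the old one
            pairs.append(PreferencePair(prompt, samples[i + 1], samples[i], weight=0.3))
        elif event.startswith("switch_back_to_"):
            # switching back implies the earlier sample is preferred over the latest one
            j = int(event.rsplit("_", 1)[1])
            pairs.append(PreferencePair(prompt, samples[j], samples[-1], weight=0.5))
    return pairs

session = pairs_from_session(
    "Name a vegetable.", ["Carrot.", "Broccoli."], ["try_again", "switch_back_to_0"]
)
print(session)
```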
Nameless1995 t1_j0a97yt wrote
Reply to comment by Purplekeyboard in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
> But it would all be nonsense.
Modeling the data-generating rules (even if arbitrarily created rules) and relations from data seems to be close to "understanding". I don't know what would even count as a positive conception of understanding. In our case, the data that we receive is not just generated by an arbitrarily created algorithm, but by the world -- and so the models we create help us orient better to the world and are in that sense "more senseful", but at a functional level not necessarily fundamentally different.
Moreover, this applies to any "intelligent agent". If you feed it arbitrary procedurally generated data, what it can "understand" will be restricted to that specific domain (and not reach the larger world).
> GPT-3 only knows the text world, it only knows what words tend to follow what other words.
One thing to note is that the text world is not just something that exists in the air; it is a part of the larger world and created by social interactions. In essence, texts are "offline" expert demonstrations in virtual worlds (forums, QA, reviews, critics, etc.).
However, obviously, GPT-3 cannot go beyond that, and cannot comprehend the multimodal associations (images, proprioception, bodily signals, etc.) beyond text (it can still associate different sub-modalities within text, like programs vs. natural text and so on), and whatever it "understands" would be quite alien to what a human understands (humans having much more limited text data, but much richer multimodally embodied data). But that doesn't mean it doesn't have any form of understanding at all (understood in a functionalist (multiply realizable) sense -- ignoring any matter of "phenomenal consciousness"); and moreover, none of this means that "making likely predictions from statistics" is somehow dichotomous with understanding.
Nameless1995 t1_j09pzkf wrote
> “Here’s a fragment of text. Tell me how this fragment might go on. According to your model of the statistics of human language, what words are likely to come next?”1
> Even if an LLM is fine-tuned, for example using reinforcement learning with human feedback (e.g. to filter out potentially toxic language) (Glaese et al., 2022), the result is still a model of the distribution of tokens in human language, albeit one that has been slightly perturbed.
...I don't see what the point is.
I have an internal model of a world developed from the statistics of my experiences, through which I model mereology (object boundaries, speech segmentation, and such), environmental dynamics, affordances, and the distribution of next events and actions. If the incoming signal is highly divergent from my estimated distribution, I experience "surprise" or "salience". In my imagination, I can use the world model generatively to simulate actions and feedback. When I am generating language, I am modeling a distribution of "likely" sequences of words to write down, conditioned on a high-level plan, style, persona, and other associated aspects of my world model (all of which can be modeled in a NN, and may even be implicitly modeled in LLMs, or can be constrained in different manners (e.g. prompting)).
Moreover, in neuroscience and cognitive science, there is a rise of predictive coding/prediction error minimization/predictive processing frameworks treating error minimization as a core unifying principle of the function of the cortical regions of the brain:
https://arxiv.org/pdf/2107.12979.pdf
> Predictive coding theory is an influential theory in computational and cognitive neuroscience, which proposes a potential unifying theory of cortical function (Clark, 2013; K. Friston, 2003, 2005, 2010; Rao & Ballard, 1999; A. K. Seth, 2014) – namely that the core function of the brain is simply to minimize prediction error, where the prediction errors signal mismatches between predicted input and the input actually received
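To make the analogy concrete, here is a minimal toy version of the predictive-coding idea (my own illustration, not any specific model from the cited survey): a higher level keeps a latent estimate, predicts the input through generative weights, and both the latent and the weights are nudged to reduce the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2)) * 0.1  # generative weights: latent -> predicted input
x = rng.normal(size=4)             # observed input
z = np.zeros(2)                    # latent estimate at the "higher level"

for _ in range(200):
    x_hat = W @ z                  # top-down prediction of the input
    error = x - x_hat              # prediction error (the "surprise" signal)
    z += 0.1 * (W.T @ error)       # inference: adjust the latent to reduce error
    W += 0.01 * np.outer(error, z) # learning: adjust the weights to reduce error

print(np.round(error, 3))          # the error shrinks as the prediction improves
```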
> “Here’s a fragment of text. Tell me how this fragment might go on. According to your model of the statistics of human language, what words are likely to come next?”1
One can argue over the semantics of whether LLMs can be understood as understanding the meanings of words when they don't learn in the exact kind of live, physically embedded, active context that humans do, but I don't see the point of this kind of "it's just statistics" argument -- it seems completely orthogonal. Even if we make a full-blown embodied multi-modal model, it will "likely" constitute a world model based on the statistics of environmental observations, providing distributions of "likely" events and actions given some context.
My guess is that these statements make people think in frequentist terms, which feels like "not really understanding" but merely counting frequencies of words/tokens in the data. But that's hardly what happens. LLMs can easily generalize to highly novel requests unlike anything occurring in the data (e.g. novel math problems, asking them to creatively integrate a NordVPN advertisement into any random answer, and so on -- even though nothing so similar appears in the training data, I'd guess). You can't really explain those phenomena without hypothesizing that LLMs model deeper relational principles underlying the statistics of the data -- which is not necessarily much different from "understanding".
Sure, sure, it won't have the exact sensori-motor-affordance associations with language, and we have to go further for grounding; but I am not sure why we should draw a hard line at "understanding" just because some of these things are missing.
> These examples of what Dennett calls the intentional stance are harmless and useful forms of shorthand for complex processes whose details we don’t know or care about.
The author seems to cherry-pick from Dennett. He makes it sound as if taking the intentional stance is simply a matter of "harmless metaphorical" ascriptions of intentional states to systems, and as if that is the only sense in which the intentional stance could license attributing intentional states to LLMs.
But Dennett also argues against the idea that there is some principled difference between "original/true intentionality" and "as-if metaphorical intentionality". Instead, Dennett considers it to be simply a matter of a continuum.
> (1) there is no principled (theoretically motivated) way to distinguish ‘original’ intentionality from ‘derived’ intentionality, and
> (2) there is a continuum of cases of legitimate attributions, with no theoretically motivated threshold distinguishing the ‘literal’ from the ‘metaphorical’ or merely ‘as if’ cases.
https://ase.tufts.edu/cogstud/dennett/papers/intentionalsystems.pdf
Dennett also seems happy to attribute "true intentionality" to simple robots (and possibly LLMs; I don't see why not, as his reasons here also apply to LLMs):
> The robot poker player that bluffs its makers seems to be guided by internal states that function just as a human poker player’s intentions do, and if that is not original intentionality, it is hard to say why not. Moreover, our ‘original’ intentionality, if it is not a miraculous or God-given property, must have evolved over the eons from ancestors with simpler cognitive equipment, and there is no plausible candidate for an origin of original intentionality that doesn’t run afoul of a problem with the second distinction, between literal and metaphorical attributions.
The author seems to be trying to do the exact opposite, arguing against the use of intentional ascriptions to LLMs in a "less-than-metaphorical" sense (and even in the metaphorical sense, for some unclear sociopolitical reason), despite current LLMs being able to perform bluffing and all kinds of complex functionalities.
Nameless1995 t1_j09m8ir wrote
Reply to comment by VordeMan in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
Footnote 1, page 2. It's a bit of a wishy-washy statement with no clear point, but he does mention RLHF.
Nameless1995 t1_j09eifz wrote
Reply to comment by economy_programmer_ in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
/u/mocny-chlapik thinks the OP paper is suggesting that LLMs don't understand by pointing out differences between how humans understand and how LLMs "understand". /u/mocny-chlapik is criticizing this point by showing that it is similar to saying aeroplanes don't fly (which they obviously do under standard convention) just because of the differences between the manner in which they fly and the manner in which birds do. Since the form of the argument doesn't apply in the latter case, we should be cautious about applying the same form in the former case. That is their point. If you think it is not a satire meant to criticize the OP, why do you think a comment is talking about flying in r/machinelearning in a post about LLMs and understanding?
Nameless1995 t1_j09c3f0 wrote
Reply to comment by economy_programmer_ in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
It was a satire.
Nameless1995 t1_j05chgd wrote
Reply to [D] Why are ChatGPT's initial responses so unrepresentative of the distribution of possibilities that its training data surely offers? by Osemwaro
> If I repeatedly make the same request in the same thread, these characteristics of the responses do display more diversity, but the responses all have the same structure (e.g. the same number of paragraphs, and often near-identical sentences in corresponding paragraphs).
For proper results you should resample by clicking the "try again" button (or resetting the thread). Otherwise, if by chance the first sample talks about a woman scientist named Samantha, all the later responses will be biased by that. Your next samples won't be independent but selectively biased by the initial sample. To control for that, when comparing multiple samples you should make sure they are sampled under similar conditions apart from differences in rng (i.e. use "try again" given the same past conversation, or ask all of them in a reset state). A toy illustration of the difference is sketched below.
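A minimal sketch of the in-thread vs. reset-state sampling difference (the `generate` function is a made-up stand-in, not the ChatGPT API):

```python
import random

def generate(history, rng):
    """Stand-in for a language model: a 'carrot' habit, weakened once carrot was already mentioned."""
    if any("carrot" in turn for turn in history):
        return rng.choice(["Broccoli.", "Spinach.", "Carrot."])
    return rng.choice(["Carrot."] * 6 + ["Broccoli."])

rng = random.Random(0)

# In-thread sampling: each answer is conditioned on all previous ones.
history = ["Name a vegetable."]
in_thread = []
for _ in range(5):
    reply = generate(history, rng)
    in_thread.append(reply)
    history.append(reply.lower())

# Independent sampling: every answer starts from the same reset state ("try again").
independent = [generate(["Name a vegetable."], rng) for _ in range(5)]

print("in-thread:  ", in_thread)
print("independent:", independent)
```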
> So I tried a simpler request, of giving me the name of a vegetable. I asked 35 times, and it said "carrot" 30 times and "broccoli" 5 times. The results of all my vegetable-name interactions are here. I also tried asking it to name an American president in 6 threads, and it said "George Washington" each time, and I tried asking it to name an intelligent person, and it usually said Albert Einstein, although it did occasionally say Stephen Hawking.
Sounds about expected.
Nameless1995 t1_j03ph06 wrote
Reply to comment by Interesting-Notice58 in /r/philosophy Open Discussion Thread | December 12, 2022 by BernardJOrtcutt
Inertia.
Nameless1995 t1_j02eqdx wrote
Reply to comment by contractualist in Why You Should Be Moral (answering Prichard's dilemma) by contractualist
I am not talking about skeptics who deny values per se, but about those who deny inherent, stance-independent values. So the radical skeptic may brutely, stance-dependently value reason, his own freedom, and such, but not believe that reason has inherent agent-independent value, or that freedom-as-such or even his own freedom has inherent value beyond the psychological contingencies of people relating to them in a "valuing" manner. Thus the radical skeptic is not sure if value is a thing or a property rather than a process-in-act -- a "value-ing" associated with how the agent relates to a thing, concept, or capacity.
Moreover, the skeptic may be skeptical towards moral realism (beyond there being game-theoretically stable principles for agents to modulate their "powers" by considering trade-offs involving the different valuings of different agents).
Nameless1995 t1_j0067m8 wrote
Reply to comment by contractualist in Why You Should Be Moral (answering Prichard's dilemma) by contractualist
So the argument is only aimed at skeptics who accept the notion of "inherent value"? Not at a more radical skeptic who is skeptical of the very possibility of values being "inherent" in objects in a stance-independent sense?
Nameless1995 t1_izxxy5k wrote
Reply to comment by contractualist in Why You Should Be Moral (answering Prichard's dilemma) by contractualist
> If it’s something physical like the body
It could be the physical body, the organism, it could be some non-physical soul; we can be agnostic to the metaphysics. But yes, we can go along with the particular physical body.
> difference is still arbitrary
But what makes a difference "arbitrary"? And what's wrong with the Skeptic valuing some "arbitrary" difference?
> For instance, if the cup on my desk has a certain value, it has that value regardless of what desk it happens to be on.
Let's go with this example. Perhaps there is a skeptic who finds the cup valuable only if it is arranged on the desk in a certain way, but not otherwise. He doesn't find the cup in itself valuable. So what is the problem with that? The fundamental values can be just brute psychological impulses; why should the skeptic need to provide any reason or justification for them? Similarly, the skeptic may not find freedom by itself valuable, but simply freedom as possessed by himself -- the physical organism (or whatever).
> It wouldn’t make sense for it to change value if its physically on another desk (or if it did, that would require an additional premise that I’m not assuming)
What additional premise? The point I am making is that people are not compelled to value some high-level universals. They can value particulars with specific relations to their own physical embodied system and history. You can't just say those are all "arbitrary" differences.
> And any equivalent cup would have the same value.
Not necessarily. A skeptic (or even any normal person) may value a certain cup more because of the specific history they share with the cup. An otherwise materially equivalent cup may simply not have the same value for the skeptic (of course, we can fool the skeptic by replacing the valued cup with a replica and misrepresent the value, but that's irrelevant).
Nameless1995 t1_izwvjd4 wrote
Reply to comment by contractualist in Why You Should Be Moral (answering Prichard's dilemma) by contractualist
> justification to value their own freedom.
The skeptic is interested in being consistent. They can't say "I value x but I don't value x", and they can't say "I value freedom as such but I don't value x's freedom", and so on. But as long as they are consistent, they don't see the need to provide justifications for why they value what they value. The skeptic can say they value "their own freedom, but not others'", which seems completely consistent to me. The "difference" is merely that their own freedom is a capacity that they have and can exercise, and the skeptic values things that relate to themselves in an empowering manner.
You can say that's an "arbitrary" difference. But I am not sure what the criterion for "non-arbitrariness" is here. Any "random" difference should do for keeping the skeptic consistent. You may say that the skeptic has to justify why the skeptic cares about the "arbitrary" difference. But it seems odd to ask for justifications of "values", because they usually turn into explanations in terms of other "values". It's not the kind of thing that can be derived from the laws of logic. The skeptic may be Humean, allowing reason to be a slave to the passions, and allowing some values to be just brute psychological forces (like hunger). The skeptic values 75% dark chocolate rather than chocolate in general, because he just does. Similarly, the skeptic values things that increase the power of the self (like the particular capacity of freedom (not freedom in general) that they possess) because the skeptic simply does.
> We understand that free beings have value compared to non-free beings (inanimate objects). We wouldn’t have a reasonable justification to prioritize only our own freedom if freedom is equal.
I don't "understand" that free beings "have" value. I simply, brutely, find myself respecting the freedom of others as my own, given no overriding reasons.
Nameless1995 t1_izv9hw3 wrote
Reply to comment by contractualist in Why You Should Be Moral (answering Prichard's dilemma) by contractualist
> What the painting has is sentimental value
To the person.
> Yet freedom, on the other hand, is agency itself. It can be thought of instead as a possession. It doesn't depend on an agent's perspective because that's what freedom is, an agent's perspective.
This seems a bit spurious. An agent can quite coherently take freedom to be a form of capacity, and they can value that capacity. Moreover, I don't see what relevance reflexivity has here. A self-conscious being valuing their self-consciousness is valuing, in a sense, that which they are, reflexively, but that doesn't make the valuing non-agent-relative. Even if we accept that freedom is an agent's perspective itself (which is a very weird phrasing), there is no clear incoherency in an agent reflexively valuing their own perspective or their own agency -- in relation to their own agent-perspective itself.
If we say that kind of valuing is "illegal", it's not clear to me what kind of "value" any freedom is even left with.
Moreover, there is a trivial sense in which freedom can be differentiated. For example, one agent can be free to act in different ways, while another agent can be barred (perhaps imprisoned or shackled). Freedom, in concrete instantiation, is then tied to particular agents.
Although you could make a case if the skeptic valued freedom as such (in that case, to be consistent, the skeptic may need to value freedom for all, if we assume everyone shares the relevant kind of potential for freedom), the skeptic may instead start by valuing "own-freedom" (the specific freedom that exists in a specific relation to oneself) rather than "freedom as such". The reason to move towards valuing freedom as such, as opposed to the particular capacity of freedom existing in a specific relation to oneself, seems to be still missing.
Nameless1995 t1_izv1iph wrote
Reply to comment by contractualist in Why You Should Be Moral (answering Prichard's dilemma) by contractualist
> If someone were to say that a valued sentimental value, they wouldn’t be acting according to that value if they ripped up that painting. The painting has sentimental value, regardless of who imposes that value onto it.
Can you elaborate on what you mean? If someone were to say that they place sentimental value on a painting, then yes, given no good overriding reasons, they wouldn't rip up the painting, because that would go against their valuing of the painting.
But that example only demonstrates that the painting has sentimental value for the particular subject who values it.
It doesn't say anything, however, about whether the value exists for others, which is what /u/timbgray was concerned about. It may, but it doesn't seem like it needs to.
In fact, the example makes more sense if we think of the "valuing" as a relational-functional orientation of the subject towards an object that induces certain behavioral dispositions which allow folk-psychological predictions (like the subject being resistant to ripping the painting apart, the subject being upset if the painting is ripped, etc.).
Instead, you seemed to be making the "value" intrinsic to the painting itself, as if values can be "imposed" on objects by subjects and remain imposed even after the death of all subjects. A skeptic can easily resist this move. The same move is also made in the article, where freedom is made valuable by some objective standard, and thus self-freedom is made as valuable as other-freedom.
Nameless1995 t1_izsgrc4 wrote
Reply to comment by JHogg11 in AI could have 20% chance of sentience in 10 years, says philosopher David Chalmers by hackinthebochs
Chalmers himself sits in between a form of information-dualism and panpsychism/panprotopsychism. He tends to think any formal functional organization of a relevant kind (no matter at which level of abstraction?) would have a corresponding consciousness (based on his dancing qualia/fading qualia thought experiments). So he finds it plausible that artificial machines can be conscious.
Nameless1995 t1_j6c2n1c wrote
Reply to comment by No_Maintenance_569 in God Is No Longer Dead! (A Kritik of AI & Man) by No_Maintenance_569
> If it's not logic, I have a lot of questions for you. I think it's deeper than that too but I don't want to defend that
Can you clarify what exactly you mean by "logic"? There are thousands of systems of logic, and not all of them agree with each other: Aristotelian logic, classical logic, intuitionistic logic, various paraconsistent logics (including trivialism), fuzzy logic, many-valued logic, modal logic, inductive logic, relevance logic, free logic, and so on and so forth. Some reject things like the law of excluded middle; some even reject the principle of non-contradiction.
Second, taking the standard classical logic developed from Frege (the most dominant one), it helps us ensure that our framework is formally consistent. But consistency is only one factor out of many.
This is a logically valid and consistent argument: "Premise: Bananas fly. Conclusion: Bananas fly" (law of identity). But it's not a sound argument. Logic helps us preserve truth (making truth-value-preserving transformations), but it doesn't tell us what is true. And real-life reasoning involves induction, abduction, appeals to simplicity, unity, elegance, and proximity to common sense, among a whole host of other things, in ranking "frameworks". Logical consistency may be a necessary demand for selecting a framework, but it is far from sufficient.
Besides, even if logical consistency were the prime factor in choosing frameworks, it still doesn't mean it's "above everything else". For example, logical consistency doesn't by itself give me a sense of aesthetic value. Being logically consistent alone doesn't help me (though it may be one thing among many that helps) with being rich, having love, or gaining nirvana. Logic doesn't tell me what to value or provide any values; logic provides a tool for being consistent and achieving what I value efficiently. So again, I don't see why I should rank logic above other things like intellectual pleasure, aesthetics, well-being, etc.
> My argument is simply a logical proof.
You can prove anything by changing the meanings of words. It's just not interesting to anyone.
For example I can say:
P1: Anything that flies are bananas.
P2: Aeroplanes are things that fly.
C: Aeroplane is a banana.
It's a formally valid proof, but it's not interesting to anyone, because I am just using false premises, or merely using language in an unconventional way.