
Position of the elevator when a WW2 plane is parked


Recommended Posts

AndyJWest
Posted
45 minutes ago, AEthelraedUnraed said:

(do flight manuals even contain information about elevator droop?)

 

I've looked at a great many, and don't recall ever seeing the slightest discussion of the topic. Which is unsurprising, since this thread has provided zero verifiable evidence that anyone gave a damn about it during the period we were discussing. If you were in the aircraft and you wanted the control surface somewhere, you moved it there. If you weren't in it, it wasn't your problem.

Dragon1-1
Posted
9 hours ago, AEthelraedUnraed said:

Then tell me, how does human reasoning work on a cellular or even molecular level? 

Here's the thing: we don't know. And if you are indeed familiar with the sciences I mentioned at more than a surface level, you should know that, too. I'm making assumptions directly based on what you wrote, not making personal attacks. Your deterministic worldview is characteristic of experts in computing and certain branches of maths, not in natural sciences.

 

You, however, seem to be making very simplistic assumptions about a system which has, so far, eluded our investigations. I would be very careful with that. Biology has a way of subverting our expectations, and it's not rare to discover that our previous understanding of some biological mechanism was flat-out wrong. It's probably a chain of electrochemical reactions, but within that, there's a lot of potential for exotic interactions at the molecular level.

9 hours ago, AEthelraedUnraed said:

If there exists such a mathematical model, then it has been proven that this can be asymptotically approximated by a series of nonlinear operations, which is exactly what a Neural Net is. And which incidentally also happens to be what a patch of interconnected neurons is.

I have no doubt the brain can be described using some kind of mathematics. Markov chains might be general enough to be theoretically capable of describing processes in the brain, but this remains to be determined. This really comes back to the question of whether the human brain can be modeled by a Turing machine.

 

Also, worth noting, such a model would not necessarily be of any use. An NN could be able to implement Shor's algorithm, for instance (it can be computed deterministically, after all), but not in a useful way. If whatever happens in the brain would require a moon-sized NN to even begin to approach it, that means the model is of little practical use. While it would be useful to philosophers (as this would mean the answer to the above question would be yes, since an NN can run on a Turing machine, and constraints such as the size of the universe can be ignored for discussion's sake), it'd be much less useful to AI engineers.
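For what it's worth, the approximation claim being debated here is easy to demonstrate concretely. Below is a minimal sketch (names and numbers are my own, purely illustrative): a one-hidden-layer ReLU "network" whose weights are *constructed*, not trained, so that it piecewise-linearly interpolates f(x) = x² on [0, 1]. Adding hidden units shrinks the error, which is the universal-approximation idea in miniature; it says nothing about whether such a construction is *practical* for brain-scale processes, which is exactly Dragon1-1's point.

```python
# Constructive universal-approximation sketch: approximate f(x) = x^2 on [0, 1]
# with a single hidden layer of ReLU units, one per interpolation knot.

def relu(x):
    return max(0.0, x)

def build_relu_net(f, n_pieces):
    """Return (bias, units) so the net interpolates f at the knots i/n_pieces."""
    h = 1.0 / n_pieces
    knots = [i * h for i in range(n_pieces)]
    slopes = [(f(x + h) - f(x)) / h for x in knots]
    # Each hidden unit relu(x - knot) contributes the *change* in slope at its knot.
    weights = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n_pieces)]
    return f(0.0), list(zip(knots, weights))

def net(x, bias, units):
    return bias + sum(w * relu(x - k) for k, w in units)

bias, units = build_relu_net(lambda x: x * x, 64)

# Piecewise-linear interpolation error for x^2 is at most h^2 / 8 = 1 / 32768.
err = max(abs(net(i / 1000, bias, units) - (i / 1000) ** 2)
          for i in range(1001))
print(err)
```

Doubling the number of hidden units quarters the worst-case error here, so arbitrary accuracy is reachable in principle; the catch, as noted above, is how fast the required network size grows for the function you actually care about.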

9 hours ago, AEthelraedUnraed said:

I disagree. Reasoning and knowledge are two different things and I maintain that you do not need the former for the latter.

They are different, I'm not saying they're not, but you need the former to create the latter. Otherwise, every book would be full of "knowledge", but without a human to give the characters meaning, it's just a bunch of ink and paper. It is encoded information, nothing more, and this information is not physically meaningful (thermodynamically, molecules arranged in a book are just as good as molecules arranged any other way, as long as the entropies match) until it's been read.

 

All that is quibbling over terminology at this point, though. My point is, we do things with information stored in our brains that LLMs approximate in a crude way by brute force. What you want to call it is irrelevant. What's important is that the distinction is made, and that we acknowledge LLMs are incapable of doing that.

35 minutes ago, AndyJWest said:

If you were in the aircraft and you wanted the control surface somewhere, you moved it there. If you weren't in it, it wasn't your problem.

Sometimes, as in the Mustang and the Fw 190, it would have mattered for towing the aircraft, since the tailwheel lock is connected to the stick. Stick forward (or centered) unlocks the wheel; stick back locks it. Other than that, though, it wouldn't matter.

AEthelraedUnraed
Posted
40 minutes ago, Dragon1-1 said:

All that is quibbling over terminology at this point, though. My point is, we do things with information stored in our brains that LLMs approximate in a crude way by brute force. What you want to call it is irrelevant.

What do you mean, "it is irrelevant"? My whole point is about the terminology used! What you want to call it is the very object of my argument. It is the very thing I objected to when I replied to MaxPower. How can you say that the entire point of the discussion is irrelevant to the discussion?

 

Sure, it is irrelevant in the grand scale of things. It doesn't change the fact that indeed, NNs are not able to do everything as well as the human brain can, but that was never the point of the discussion and was also never disputed by me.

czech693
Posted

I don't know what ChatGTP is, but it sounds like BS.  This thread has gone way off topic and turned into a debate.  Time it was locked.

Dragon1-1
Posted
1 hour ago, AEthelraedUnraed said:

What do you mean, "it is irrelevant"? My whole point is about the terminology used! What you want to call it is the very object of my argument.

OK, then my argument is that you seem to define the word "knowledge" as a term that is essentially redundant with "information". I defined it as information structured at a level that requires reasoning to reach (especially useful since humans process the majority of information at this level). Since you acknowledge there is a sharp distinction here between humans and LLMs, I propose that the way I define it is more useful.

 

When I said LLMs don't know things, I mean that the connections created by brute force lack this crucial refinement. This is a key reason why LLMs hallucinate, and why they will not admit ignorance. With a big enough hunk of data, you can always find some string of words that'll fit the question. The string might be a load of bull, but it's all the same to the LLM.

36 minutes ago, czech693 said:

I don't know what ChatGTP is, but it sounds like BS.

It is BS, but who are you to decide what gets locked? Don't like it, don't check it. That said, it would be useful to educate yourself about ChatGPT and the like, since this isn't going to be the last time it comes up.
