E. T. Jaynes' subjectivism vs measurement of distributions
In his paper, E. T. Jaynes argues that entropy is a measure of our ignorance about a system. As such, the probability distribution of states $\{p_k\}$ has to be chosen in the most unbiased way, by maximizing the entropy subject to constraints encoding all the available information. This is a subjectivist point of view, because it treats probabilities as descriptions of our ignorance rather than as intrinsic properties of the system. He also claims that the reason statistical mechanics works is that the distributions are sharply peaked: as long as the peak is at the correct position, the shape of the distribution is not that relevant.
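Concretely, the maximization just described can be written out (a standard sketch, added here for reference): maximizing the entropy $S = -\sum_k p_k \ln p_k$ subject to normalization $\sum_k p_k = 1$ and a fixed mean energy $\sum_k p_k E_k = \langle E \rangle$ gives, via Lagrange multipliers,
$$p_k = \frac{e^{-\beta E_k}}{Z}, \qquad Z = \sum_k e^{-\beta E_k},$$
i.e. the Boltzmann distribution mentioned below.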
With the development of computers and experimental techniques, however, we are now able to simulate the distribution of states of a system, or to measure actual equilibrium fluctuations at high resolution (with optical tweezers, for example). Going beyond macroscopic quantities, we can thus simulate or measure actual probability distributions of states. Measurements show that these are indeed the distributions that maximize entropy (at constant temperature, for instance, it is the Boltzmann distribution). How would a subjectivist then argue that the probabilities of states are due to our lack of information about the system? If I can measure those distributions, they look very objective to me.
statistical-mechanics entropy probability information
asked 5 hours ago by Botond
1 Answer
"Going beyond macroscopic quantities, thus, we can simulate/measure actual probability distributions of states."
This is a misunderstanding: one never measures a probability; the verb does not apply to the noun. In such simulations or calculations one may record certain numbers, such as the number of times the system was found in some region of phase space (or the number of times the system assumed some definite microstate). Such numbers can be divided by the total number of observations or time points, but this only gives the frequency of occurrences in that simulation, an artefact that depends on the initial condition and may not repeat itself with a different initial condition. It can serve as an estimate of the probability, but it is not itself the probability, which is supposed to abstract away from details such as the initial condition. Jaynes provides a coherent way to think about probability, and a way to find probabilities in a number of cases of interest in statistical physics, using the principle of maximum information entropy. Of course, one should test, where possible, the usefulness of the probabilities so determined, for example through computer simulations of concrete cases (a minimal sketch of such a test follows below).
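To illustrate the frequency-vs-probability distinction, here is a minimal sketch (added here, not part of the original answer; the two-level energies, temperature, and step counts are arbitrary choices for illustration). It compares occupation frequencies from two Metropolis runs, started from different initial conditions, against the exact Boltzmann probabilities:

```python
import math
import random

# Hypothetical two-level system (illustrative values; energies in units of kT).
energies = [0.0, 1.0]
beta = 1.0  # inverse temperature

# Exact maximum-entropy (Boltzmann) probabilities.
Z = sum(math.exp(-beta * E) for E in energies)
boltzmann = [math.exp(-beta * E) / Z for E in energies]

def occupation_frequencies(n_steps, start_state):
    """Fraction of steps spent in each state during one Metropolis run."""
    counts = [0, 0]
    state = start_state
    for _ in range(n_steps):
        proposal = 1 - state  # propose the other state
        dE = energies[proposal] - energies[state]
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            state = proposal
        counts[state] += 1
    return [c / n_steps for c in counts]

# The frequencies fluctuate from run to run and depend on the initial
# condition; they estimate the Boltzmann probabilities, but they are not
# the probabilities themselves.
print("Boltzmann       :", boltzmann)
print("run from state 0:", occupation_frequencies(100_000, start_state=0))
print("run from state 1:", occupation_frequencies(100_000, start_state=1))
```

For a short run (say 100 steps) the two empirical frequencies can differ noticeably from each other and from the Boltzmann values, which is exactly the sense in which a frequency is an artefact of the particular run.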
answered 3 hours ago by Ján Lalinský
I agree that all you can measure is the frequency of occurrences. At equilibrium, however, these frequencies converge to well-defined values as you increase the number of measurements, independently of the initial conditions. I'm biased because I was trained in the objectivist spirit, but I still have a hard time seeing why it should be obvious that the measured frequencies are precisely the same as the "working probabilities" "guessed" via the maximum entropy principle.
– Botond, 2 hours ago
The match between observed frequencies and maxent probabilities isn't obvious, and it does not always hold. However, it is simply the most probable thing to happen, if all available knowledge was taken into consideration when predicting the probabilities. It is like the law of large numbers (see the sketch after this comment): there is no guarantee that statistics over a large number of experiments will agree with the probability derivation, but if the derivation is right, agreement is very probable.
– Ján Lalinský, 1 hour ago
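A quick numerical illustration of this law-of-large-numbers point (an added sketch, not part of the comment thread; the fair coin and sample sizes are arbitrary choices):

```python
import random

# Empirical frequencies of a fair coin are never guaranteed to equal 1/2,
# but large deviations become increasingly improbable as the sample grows.
random.seed(1)
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:>9}: frequency of heads = {heads / n:.4f}")
```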