#computerscience


In the second half of the 2000s, wireless #mesh networks of mobile radios were known as #MANETs (mobile ad-hoc networks) and were proposed for various scenarios, including disaster-area and first-responder communications.

#Research questions for #ComputerScience and #engineering include the performance, efficiency, and robustness of routing algorithms, as well as various security aspects.

To my knowledge, these networks still see no serious use today.

researchgate.net/publication/2
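
Not from the post or the linked publication, just a rough illustration of what a MANET routing algorithm has to do: the sketch below floods a route request through a toy topology, in the spirit of reactive protocols such as AODV. The topology, node names, and the `discover_route` helper are made up for demonstration.

```python
# Illustrative sketch only: AODV-style on-demand route discovery in a toy
# MANET, modeled as a static adjacency map (real radios would be mobile).
from collections import deque

def discover_route(topology, source, destination):
    """Flood a route request (RREQ) hop by hop and return the first route
    found, roughly mirroring how reactive MANET protocols build paths on demand."""
    visited = {source}
    queue = deque([[source]])          # each entry is the path taken so far
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path                # first complete path = discovered route
        for neighbor in topology.get(node, ()):
            if neighbor not in visited:   # suppress duplicate RREQ forwarding
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None                        # destination unreachable

# Toy topology: radios A..E with their current (symmetric) radio links.
topology = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(discover_route(topology, "A", "E"))   # e.g. ['A', 'B', 'D', 'E']
```

The research questions above start exactly where this toy ends: how such discovery behaves when nodes move, links fail, traffic grows, or neighbors are malicious.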

For students in grades 10 to 13 at UK schools who are interested in #math and #physics, there is COMPOS, essentially an online STEM club organized by the University of Oxford, targeted at students whose school only offers sports clubs. It is in the process of being extended to also cover #biology, #chemistry, and #computerScience. Registration is open until 28 September. compos.web.ox.ac.uk/


🖥️ **How Overconfidence in Initial Choices and Underconfidence Under Criticism Modulate Change of Mind in Large Language Models**

🔗 doi.org/10.48550/arXiv.2507.03.

From the arXiv abstract: Large language models (LLMs) exhibit strikingly conflicting behaviors: they can appear steadfastly overconfident in their initial answers whilst at the same time being prone to excessive doubt when challenged. To investigate this apparent paradox, we developed a novel experimental paradigm, exploiting the unique ability to obtain confidence estimates from LLMs without creating memory of their initial judgments -- something impossible in human participants. We show that LLMs -- Gemma 3, GPT4o and o1-preview -- exhibit a pronounced choice-supportive bias that reinforces and boosts their estimate of confidence in their answer, resulting in a marked resistance to change their mind. We further demonstrate that LLMs markedly overweight inconsistent compared to consistent advice, in a fashion that deviates qualitatively from normative Bayesian updating. Finally, we demonstrate that these two mechanisms -- a drive to maintain consistency with prior commitments and hypersensitivity to contradictory feedback -- parsimoniously capture LLM behavior in a different domain. Together, these findings furnish a mechanistic account of LLM confidence that explains both their stubbornness and excessive sensitivity to criticism.
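
The abstract uses normative Bayesian updating as the reference point for how confidence should shift after advice. The sketch below is not from the paper; it just works through that baseline with hypothetical numbers (80% initial confidence, a 70%-accurate advisor), so the claimed overweighting of inconsistent advice has a concrete yardstick.

```python
# Not the paper's code: a hypothetical worked example of the normative
# Bayesian baseline the abstract compares LLM behavior against.

def bayesian_update(prior_confidence: float, advisor_accuracy: float,
                    advice_agrees: bool) -> float:
    """Posterior probability that the initial answer is correct after hearing
    an advisor who is right with probability `advisor_accuracy`."""
    p, a = prior_confidence, advisor_accuracy
    if advice_agrees:
        # Agreement is evidence for the initial answer.
        return (p * a) / (p * a + (1 - p) * (1 - a))
    # Disagreement is evidence against it.
    return (p * (1 - a)) / (p * (1 - a) + (1 - p) * a)

# Hypothetical numbers: 80% initial confidence, 70% accurate advisor.
print(bayesian_update(0.8, 0.7, advice_agrees=True))    # ~0.90
print(bayesian_update(0.8, 0.7, advice_agrees=False))   # ~0.63
```

Under this symmetric baseline, disagreement from a 70%-accurate advisor should lower confidence from 0.80 to roughly 0.63; the abstract's claim is that LLMs deviate qualitatively from such updating, reacting far more strongly to disagreement than to agreement.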