
Sanity check needed:

✢ If a term M is legal in λ2,

✢ And if a term N is also legal in the same context,

✢ Then MN is also legal, right?

---

This seems intuitively obvious but I can't actually find a definition / lemma in my textbook to confirm it.

The Subterm Lemma seems to be the opposite of what I'm seeking: it says that if MN is legal, then the subterms M and N are legal too.
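A note on the converse direction (standard λ2 presentation, not from this thread): legality of M and N alone is not enough; the application rule also needs M's type to be an arrow whose domain matches N's type. A sketch of the rule:

```latex
% Application (arrow-elimination) in lambda-2: the types must fit.
\frac{\Gamma \vdash M : A \to B \qquad \Gamma \vdash N : A}
     {\Gamma \vdash M\,N : B}
% Counterexample to the unconditioned claim: with x : \alpha and
% y : \alpha both legal, x\,y is not typable, since \alpha is not
% an arrow type.
```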

Continued thread

Why am I asking?

Previously I worked out the type of

λxyz. xz(yz)

as

(B → (A → C)) → (B → A) → B → C

where x : B → (A → C), y : B → A, z : B

---

now I want to parameterise the types of x, y, z to make a λ2-term, and it seems easier if I can use the funny (arrow) types

λα,β,γ:*. λx:(β → (α→γ)) . λy:(β→α) . λz:β. xz(yz) : Παβγ:*.(β → (α→γ)) → (β→α) → β → γ

If I wasn't allowed to have non-trivial types for x, y, z then the "algebra" on the RHS would be really convoluted
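As a cross-check (my sketch, not from the book): the λ2 term above transcribes directly into a language with parametric polymorphism, with the Π-abstraction over α, β, γ becoming the generic parameters:

```typescript
// λα,β,γ:*. λx:(β→(α→γ)). λy:(β→α). λz:β. x z (y z)
// Παβγ:* becomes the generic parameters <A, B, G>.
const s =
  <A, B, G>(x: (z: B) => (a: A) => G) =>
  (y: (z: B) => A) =>
  (z: B): G =>
    x(z)(y(z));

// e.g. instantiating all three type variables at number:
// s(z => a => z * a)(z => z + 1)(3) evaluates to 3 * 4 = 12
```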

---

2/2

Is it normal to do this kind of thing?

λα,β:*. λx:(α→β). λy:α. xy : Πα,β:*. (α→β) → α → β

That is, to have non-trivial types for objects like x?

---

To make that clearer ...

So instead of just doing x:α, y:β we do things like x:α→β→β and y:β→α
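For what it's worth (my rendering, reading the term as λα,β:*. λx:(α→β). λy:α. x y), this is the ordinary polymorphic application function, and giving bound variables arrow types like α→β is completely normal in λ2; that is how every higher-order function is typed:

```typescript
// λα,β:*. λx:(α→β). λy:α. x y  :  Πα,β:*. (α→β) → α → β
// The type abstraction is the implicit generic parameters <A, B>.
const apply = <A, B>(x: (a: A) => B) => (y: A): B => x(y);
```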

---

1/2

Replied in thread

@jonmsterling

Thanks

---

I previously struggled to find a T such that

Πα:*.α : T

which appears to ask for a 'type of a type'

with some help, I used the formation rule to say T must be *
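Spelled out (standard λ2 formation rule, my sketch):

```latex
% Formation: a Pi-type whose body has kind * is itself of kind *.
\frac{\Gamma,\ \alpha : * \vdash \alpha : *}
     {\Gamma \vdash (\Pi \alpha : * .\ \alpha) : *}
```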

---

The reason I'm struggling a little with this kind of expression again is I have a more challenging (to me) exercise where I have to find an inhabitant for α and β given the context

α : ∗
β : ∗
x : α →Πα:*.α
f : (α →α) →α

looking at x is why I'm thinking "what is the type of Πα:*.α?"
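One way the pieces can combine (my sketch of a candidate answer, worth checking against the book's rules): f applied to the identity on α yields an inhabitant of α, and feeding that to x yields the polymorphic type, which can then be instantiated at β.

```latex
f\,(\lambda y{:}\alpha.\ y) : \alpha
\qquad
x\,(f\,(\lambda y{:}\alpha.\ y)) : \Pi\alpha{:}*.\ \alpha
\qquad
x\,(f\,(\lambda y{:}\alpha.\ y))\,\beta : \beta
```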

My old introduction was very outdated, so it's time to reintroduce myself:
#introduction

Hi 👋, I’m Laura.

I am a transfeminine person, somewhat in the middle of my transition. 🏳️‍⚧️ #trans #transbubble

A major part of my time I spend as a Postdoc in computer science, working on embedded AI and low-power IoT communication. #cs #TinyML #IoT #academia #science

Outside of work, I am active in the local #queer center (board member, GER: Vorstand), I enjoy playing board games, and I listen to too many #podcasts.

advice needed !

I need to find an inhabitant of β given the context

Γ ≡ β : ∗, x : Πα : ∗. α

My brain is telling me that I am free to choose pretty much any type to inhabit β because the context doesn't impose any restriction.

I could have "int" or "bool" or "int → bool", etc and they would all be ok.

Am I right? I'm suffering beginner lack of confidence.
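A caution on my part (check this against your textbook): an inhabitant of β is a term of type β built in Γ, not a choice of type to substitute for β, and the context hands you one via Π-elimination:

```latex
% x can be instantiated at any type in scope, including \beta itself:
\frac{\Gamma \vdash x : \Pi\alpha{:}*.\ \alpha \qquad \Gamma \vdash \beta : *}
     {\Gamma \vdash x\,\beta : \beta}
```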

⛓️💥 Broken Tokens? Your Language Model can Secretly Handle Non-Canonical Tokenizations

arxiv.org/abs/2506.19004

Modern tokenizers employ deterministic algorithms to map text into a single "canonical" token sequence, yet the same string can be encoded as many non-canonical tokenizations using the tokenizer vocabulary. In this work, we investigate the robustness of LMs to text encoded with non-canonical tokenizations entirely unseen during training. Surprisingly, when evaluated across 20 benchmarks, we find that instruction-tuned models retain up to 93.4% of their original performance when given a randomly sampled tokenization, and 90.8% with character-level tokenization. We see that overall stronger models tend to be more robust, and robustness diminishes as the tokenization departs farther from the canonical form. Motivated by these results, we then identify settings where non-canonical tokenization schemes can *improve* performance, finding that character-level segmentation improves string manipulation and code understanding tasks by up to +14%, and right-aligned digit grouping enhances large-number arithmetic by +33%. Finally, we investigate the source of this robustness, finding that it arises in the instruction-tuning phase. We show that while both base and post-trained models grasp the semantics of non-canonical tokenizations (perceiving them as containing misspellings), base models try to mimic the imagined mistakes and degenerate into nonsensical output, while post-trained models are committed to fluent responses. Overall, our findings suggest that models are less tied to their tokenizer than previously believed, and demonstrate the promise of intervening on tokenization at inference time to boost performance.

#ai #llm #cs

I wonder whether "fusion trees with multiple roots" exist

what I know after a quick search

  • there is a bonsai technique concerning roots of fusion trees
  • there is a minecraft modpack called fusion forest
  • there is a company called b-forest
  • there is a famous counter strike player named forest
  • there is a subfield named forest informatics
  • there is a greek thing named b dag

Help 😶‍🌫️!

The textbook says that types don't matter in β-reduction.

I've been bashing my head for 3 days and still can't see it.

In the following

(λx:σ.M) N

surely the type of N must match the type of x ??

When the textbook talks about β-substitution (not reduction), it says types matter, because you can't substitute N for x if their types don't match.

What am I missing?
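The resolution, as I understand the standard account: β-reduction is defined on the raw syntax and never inspects σ; whether the whole term typechecks is a separate question, answered by the typing rules and preserved by subject reduction. A minimal sketch (my illustration; the naive substitution below assumes no variable capture in the examples):

```typescript
// Raw λ-syntax; in the typed system the binder also carries σ,
// which reduction would carry along but never consult.
type Term =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: Term }
  | { tag: "app"; fn: Term; arg: Term };

// Naive substitution M[x := N]; capture-avoidance omitted for brevity.
function subst(x: string, n: Term, m: Term): Term {
  switch (m.tag) {
    case "var":
      return m.name === x ? n : m;
    case "lam":
      return m.param === x ? m : { tag: "lam", param: m.param, body: subst(x, n, m.body) };
    case "app":
      return { tag: "app", fn: subst(x, n, m.fn), arg: subst(x, n, m.arg) };
  }
}

// One β-step at the root: (λx.M) N → M[x := N]. No types anywhere.
function betaStep(t: Term): Term | null {
  if (t.tag === "app" && t.fn.tag === "lam") {
    return subst(t.fn.param, t.arg, t.fn.body);
  }
  return null;
}
```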

New work:

Distributed Transaction Patterns

github.com/ha1tch/dxp/blob/mai

A framework for understanding and implementing distributed transaction patterns through a phase-based spectrum approach.

The DXP documentation is organized as a progressive journey through distributed transaction patterns.


Why is there no consideration of base cases ?

This is the textbook's explanation of "structural induction". The subsequent worked examples don't refer to base cases either.

Is it because they are always trivially true? I doubt this myself.
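My understanding (worth checking against the book's statement): structural induction has one case per constructor, and the "base cases" are simply the constructors with no recursive arguments, where the induction hypothesis is vacuous, so the general statement quietly subsumes them. For λ-terms, variables are the base case. A sketch of the matching recursion pattern (my example):

```typescript
type Term =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: Term }
  | { tag: "app"; fn: Term; arg: Term };

// Structural recursion mirrors structural induction: one clause per
// constructor. "var" is the base case — no recursive call, so the
// induction hypothesis for it is vacuous rather than absent.
function size(t: Term): number {
  switch (t.tag) {
    case "var": return 1;
    case "lam": return 1 + size(t.body);
    case "app": return 1 + size(t.fn) + size(t.arg);
  }
}
```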

Need some help.

---

I know that to prove P ⇒ Q, I assume P and derive Q.

---

Now to prove

(A⇒B) ⇒ ((B⇒C) ⇒ (A⇒C))

I need to assume (A⇒B) and derive ((B⇒C) ⇒ (A⇒C)).

But I can't seem to make progress from (A⇒B) alone; I think I need to assume A is true as well.

---

Intuitively the statement makes total sense. But drawing the derivation as per the attached image, I get stuck unless I assume A too.

Can anyone help clarify my thinking?
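That instinct is sanctioned: to derive the inner A⇒C you open a third, nested assumption A and discharge it with ⇒-introduction; assuming (A⇒B) alone was never meant to suffice. Under Curry–Howard the whole proof is just function composition (my rendering, not from the thread):

```typescript
// A proof of (A⇒B) ⇒ ((B⇒C) ⇒ (A⇒C)): the innermost lambda binding
// `a` is exactly the extra assumption A, discharged by ⇒-intro.
const transImp =
  <A, B, C>(ab: (a: A) => B) =>
  (bc: (b: B) => C) =>
  (a: A): C =>
    bc(ab(a));
```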