
Science of Chess: A g-factor for chess? A psychometric scale for playing ability

Also, general intelligence: go read about that on Wikipedia, it's a real journey. To me it looks like the "power law" of cognition research: it carries little mechanistic insight into cognition. It might be the cognitive-science version of a chess player's rating, the invariant quantity (or instantaneous forehead-stamp value). If anything ought to be even more multidimensional than chess, it's that blob of a concept.

Expecting a magical correlation between two floating measures doesn't seem all that rational. We already know one of them is pool-dependent, so how would it relate to a dimensionless one-dimensional projection of the many dimensions of psychometrics?

One thing is always true: with the full definitions made visible, we can say with 100% certainty that each of these 1D measures is measuring what it measures, by definition. The interpretation from there, on the other hand, might not be a science. Feels good to say grounded things once in a while.
@ForeverSionce said in #12:
> I don't have general intelligence nor am I good at chess

That's not true! Just remember the lowest rating on Lichess is 400.
@ForeverSionce said in #15:
> That's only for brand new people who don't study

Says the person rated higher than me :) Look, you aren't bad at chess!
I'm a doctoral psychology student who's done a bit of bifactor modeling with the p-factor. Are you thinking of maybe doing some model comparisons using the ACT data? Compare, say, a one-factor, a second-order, and a bifactor model and test which has the best fit.
@tackyshrimp said in #17:
> I'm a doctoral psychology student who's done a bit of bifactor modeling with the p-factor. Are you thinking of maybe doing some model comparisons using the ACT data? Compare, say, a one-factor, a second-order, and a bifactor model and test which has the best fit.

That would be pretty cool. If you haven't already, do check out what else the authors have to say about their exploratory analysis. Not the same as what you're describing, but there are more details there than I wanted to get into in the article. I bet there's plenty of room for secondary analyses of the ACT data, but I'd leave that to folks who know those modeling techniques better than I do!
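
Just to make the flavor of "compare models by fit" concrete, here's a minimal sketch. It is not the bifactor vs. second-order SEM comparison you describe (that would want a proper SEM package like lavaan); it only scores candidate factor counts by cross-validated log-likelihood with scikit-learn's FactorAnalysis, on random placeholder data standing in for a players-by-measures matrix.

```python
# Crude sketch of model comparison by fit: score one vs. more latent factors
# using mean held-out log-likelihood. The data is a random placeholder, not
# real scores; substitute a players x measures matrix (e.g. ACT subtests).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))  # placeholder: 500 players x 6 measures

for n_factors in (1, 2, 3):
    fa = FactorAnalysis(n_components=n_factors)
    # cross_val_score falls back on FactorAnalysis.score, which returns
    # the average log-likelihood of held-out samples
    ll = cross_val_score(fa, X).mean()
    print(f"{n_factors} factor(s): mean held-out log-likelihood = {ll:.3f}")
```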

I'll tell you a model comparison project I haven't seen yet and would love to try out, though: I did a very crude factor analysis on Lichess Bullet, Blitz, Rapid and Puzzle ratings, but would love to expand that to include variants to see what the structure of that data is. I keep meaning to read up on data scraping to get a big collection of ratings off the site but just haven't gotten my act together yet. If you're interested, I'd be very keen to see what happens.
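
In case it helps anyone get started on the scraping part, here's a minimal sketch using the public Lichess API's POST /api/users endpoint, which accepts up to 300 comma-separated usernames per request (mind the rate limits). The username list is a placeholder; you'd still need a sampling strategy for picking players.

```python
# Minimal sketch: fetch ratings for a batch of players from the Lichess API.
# POST /api/users takes a comma-separated list of usernames (max 300/request).
import requests
import pandas as pd

usernames = ["player_one", "player_two"]  # placeholder sample

resp = requests.post(
    "https://lichess.org/api/users",
    data=",".join(usernames),
    timeout=30,
)
resp.raise_for_status()

perf_keys = ["bullet", "blitz", "rapid", "puzzle", "crazyhouse", "atomic"]
rows = []
for user in resp.json():
    perfs = user.get("perfs", {})
    rows.append(
        {"username": user["username"],
         **{k: perfs.get(k, {}).get("rating") for k in perf_keys}}
    )

df = pd.DataFrame(rows).dropna()  # keep players rated in every listed pool
print(df.head())
```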
Would you mind sharing that crude analysis? Maybe, if not in a blog post, a flashback visit to the forum? But what world of factors did you consider? Stuff from the chess itself, I assume. (Not having read the most important part of the blog yet, as if I were fearing the moment when the anticipation would be behind me.) Kidding, I just got distracted by an old pet fog of mine (never mind).

My hope is that there is some chess in the ACT. I dare not look yet; I don't want to burst that bubble of hope.
I didn't read the papers, so sorry if this is an elementary question, but it looks like the Amsterdam Chess Test was composed intuitively at first (i.e. the authors just threw in what they thought might be related). So there's no guarantee that this is the best possible test, or even a particularly good one, right?

Looking at the correlations, it seems the tactics-related components were much more strongly correlated with Elo. Is it possible that any validity is due to the choose-a-move/tactics components? My intuition (based on nothing) is that the recall, verbal knowledge, and motivation pieces aren't useful.

If it basically comes down to tactics, then perhaps any tactics trainer (like Puzzle Storm) would be as good as the ACT.