https://ccolas.github.io/
with induction you can search, because you have a metric to optimise (% of train examples correct)
with transduction there is no clear metric to guide search / brute force, so the model needs to get it right on the first try, or come up with a way to guide its own search
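a minimal sketch of what that metric buys you in the induction setting: score candidate programs by the fraction of train pairs they reproduce, then filter / hill-climb on that score. the grids, candidate programs, and helper names below are made up for illustration (a real system would sample candidates from an LLM, not a hand-written list):

```python
# Toy induction-style search for an ARC-like task.
# The metric (% train examples correct) lets us rank and filter candidates.

def train_accuracy(program, train_pairs):
    """Fraction of train examples this candidate program reproduces."""
    hits = sum(1 for inp, out in train_pairs if program(inp) == out)
    return hits / len(train_pairs)

# toy task whose hidden rule is "transpose the grid"
train_pairs = [
    ([[1, 2], [3, 4]], [[1, 3], [2, 4]]),
    ([[5, 6], [7, 8]], [[5, 7], [6, 8]]),
]

# hypothetical candidate programs (in practice: LLM-sampled code)
candidates = [
    lambda g: g,                               # identity
    lambda g: [row[::-1] for row in g],        # mirror rows
    lambda g: [list(col) for col in zip(*g)],  # transpose
]

# the train metric guides the search: keep the best-scoring candidate
best = max(candidates, key=lambda p: train_accuracy(p, train_pairs))
print(train_accuracy(best, train_pairs))  # the transpose candidate scores 1.0
```

with transduction there is no analogue of `train_accuracy` to rank predicted output grids, which is the asymmetry the thread is pointing at.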
with transduction you can't filter
the challenge rules say you can submit only 2 (3?) solutions per problem
if the o3 prompt that circulates is correct, the o3 score uses transduction (predicting the output grid directly), and you can't hill climb there
you can ensemble, but that doesn't help much for hard problems
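to make "ensemble" concrete: without a train metric, about all you can do is sample several predicted output grids and submit the most frequent ones (majority vote). a hedged sketch, with made-up grids standing in for model samples; the function name and `k=2` (the submission limit) are illustrative assumptions:

```python
# Majority-vote ensembling over transduction samples: no metric to
# hill-climb on, so we just keep the k most frequent predicted grids.
from collections import Counter

def top_k_grids(predictions, k=2):
    """Return the k most frequently predicted grids, most common first."""
    # tuples make grids hashable so Counter can tally them
    counts = Counter(tuple(map(tuple, g)) for g in predictions)
    return [[list(row) for row in grid] for grid, _ in counts.most_common(k)]

# toy samples: the first grid appears 3 times, the second once
samples = [
    [[1, 0], [0, 1]],
    [[1, 0], [0, 1]],
    [[1, 1], [0, 1]],
    [[1, 0], [0, 1]],
]
print(top_k_grids(samples))
```

on hard problems most samples are wrong in different ways, so the vote concentrates on nothing useful, which is why ensembling doesn't help much there.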
1) that guy got 50% on the public test set, which is easier than the private test set where o3 reached 85% (87?)
it seems they don't use program induction, so they can't hill climb on training examples either
we have a cool workshop on intrinsically motivated open-ended learning with a blend of cogsci and ai on dec 15
@IMOLNeurIPS2024 on X
see program here: imol-workshop.github.io/pages/program/
autotelic rl is usually concerned with open-ended exploration in the absence of external reward
how should we conduct open-ended exploration *at the service* of an external task?
deep rl skills required
we wanna study how llm-based agents can be used to facilitate collective intelligence in controlled human experiments where groups of participants collectively find solutions to problems
this requires some background in cogsci + llms