A few days ago, I was trying to fine-tune an open-source Large Language Model (LLM) - that is, take an existing model and tweak it with new data - on my Windows laptop.
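For the curious, here's roughly what "tweak it with new data" looks like in practice. This is a minimal sketch assuming the Hugging Face transformers, peft, and datasets libraries; the model and dataset names are placeholders, not what I was actually using.

```python
# Minimal LoRA fine-tuning sketch (placeholder model and data, not my actual setup).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder: any small causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of all the weights,
# which is the only reason this is even plausible on a laptop.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Placeholder "new data": a slice of wikitext, tokenized into fixed-length chunks.
data = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Conceptually simple. Actually getting it to run on Windows was another story.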
The big takeaway for me was the core difference between experts and non-experts in any given field. Experts aren't faster at doing things; they just spend less time on paths that won't work. They recognize sooner when something isn't going to work, and they have better heuristics for knowing when they're on the right track.
Sometimes I think the best path to developing expertise is just being allowed to follow the paths that don't work deeply enough that you actually hit the point of failure. That lesson sticks better than someone telling you "go this way, because that way won't work".