Experimental KataGo engine with a new feature

An experimental KataGo executable with a new feature called "eval cache" has been released.
https://drive.google.com/drive/folders/11YaTh0dejdLJyw340dwS6o8_b8hkJeE7?usp=drive_link

To enable this feature, add the following lines to the config file:

useEvalCache = true
evalCacheMinVisits = 100

hope366  2024/10/10(Thu) 08:20  No.1610
Re: Experimental KataGo engine with a new feature
hope366, thank you for the new information. May I ask one thing?
What kind of feature is this "eval cache"?
It doesn't seem to be mentioned in README.txt.
Eba  2024/10/10(Thu) 10:53  No.1611
Re: Experimental KataGo engine with a new feature
I'm attaching the explanation written by lightvector.
It seems to be about how the saved data is put to use, but since it is technical and in English, I couldn't understand it very well.

Here's an experimental branch that adds a long-lived eval cache which remembers positions that have a certain minimum number of visits and which survives between repeated searches. Subsequent searches use the cache as a significant bias, to hopefully converge faster than without it.

https://github.com/lightvector/KataGo/pull/992
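
As a rough illustration of the idea only (this is not the code from the pull request; EvalCache, record, and lookup are made-up names), such a long-lived cache could look something like this in Python:

from dataclasses import dataclass

@dataclass
class CachedEval:
    winrate: float   # value estimate remembered for this position
    visits: int      # number of visits that backed the estimate

class EvalCache:
    # A cache that outlives individual searches.
    def __init__(self, min_visits: int = 100):   # cf. evalCacheMinVisits
        self.min_visits = min_visits
        self.table = {}   # position hash -> CachedEval

    def record(self, pos_hash: int, winrate: float, visits: int) -> None:
        # Only positions that were searched deeply enough are remembered.
        if visits >= self.min_visits:
            old = self.table.get(pos_hash)
            if old is None or visits > old.visits:
                self.table[pos_hash] = CachedEval(winrate, visits)

    def lookup(self, pos_hash: int):
        # A later search consults this and uses any hit as a bias,
        # rather than trusting the stored value outright.
        return self.table.get(pos_hash)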

For example, if in live interactive analysis (e.g. Lizzie) you go into a variation that KataGo misevaluates or has a blind spot in, search it for a bit so that the evaluation becomes correct or the blind-spot move is found, and then go back to a prior position, the new search from the prior position should make use of the corrected eval within that variation.

It's not perfect at "using" the corrected eval because there's a real challenge in figuring out how to weight the cache vs the fresh search.

You want to smoothly interpolate between the cache and a fresh search as the fresh search gets more visits, but how do you do it without double-counting the weight from the cache, given that the child nodes of the fresh search may themselves be using the cache?
What if you do a large search that didn't find a blind spot, so that the non-blind-spot eval gets cached with a huge number of visits and is now a bad bias that is super-hard to overcome?
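
To make that weighting question concrete, here is a deliberately naive Python sketch of blending a cached eval with a fresh search; the formula is invented for illustration and is not what the branch actually does:

def cache_weight(cached_visits: int, fresh_visits: int) -> float:
    # Let the cache dominate while the fresh search is small,
    # and fade out as the fresh search accumulates its own visits.
    return cached_visits / max(cached_visits + fresh_visits, 1)

def blended_value(cached_winrate: float, cached_visits: int,
                  fresh_winrate: float, fresh_visits: int) -> float:
    w = cache_weight(cached_visits, fresh_visits)
    return w * cached_winrate + (1.0 - w) * fresh_winrate

# The double-counting risk: fresh_winrate is itself averaged up from child
# nodes, and those children may already have been biased by the cache, so
# weighting by cached_visits on top of that counts the same information twice.
# And if a huge search that missed a blind spot was cached, cached_visits is
# so large that the bad value is nearly impossible to wash out.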

So, in making the use of the cache soft enough to try to mitigate these kinds of things, the cache also ends up not being perfectly sharp at re-using the value. It's just a smooth bias, so you should NOT expect child nodes to be "pinned" at the winrate you got from a deeper search after resolving a blind spot or misevaluation. Also, the cache is based on the exact position, using the same hash as used for graph search, so it will not generalize to fixing the same blind spot as soon as the opponent horizon-effects with an irrelevant ko threat, unless you also search those positions and get them cached too.
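
A toy example of why the exact-position keying matters (the Zobrist-style hash below is schematic, not KataGo's actual graph-search hash): a single irrelevant ko-threat exchange elsewhere on the board changes the key, so the cached eval is not found again.

import random

random.seed(0)
# One random 64-bit code per (color, point); XOR-ing them gives a position hash.
ZOBRIST = {(color, point): random.getrandbits(64)
           for color in ("black", "white")
           for point in range(19 * 19)}

def position_hash(stones):
    h = 0
    for stone in stones:          # stones: iterable of (color, point) pairs
        h ^= ZOBRIST[stone]
    return h

base = [("black", 60), ("white", 72)]
# Same local situation, but an irrelevant ko threat was played and answered,
# leaving two extra stones elsewhere on the board.
after_ko_threat = base + [("white", 300), ("black", 301)]

assert position_hash(base) != position_hash(after_ko_threat)
# Different key, so the eval cached under position_hash(base) is not reused here.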

This is rough and experimental. The cache has no maximum size (so it will go on eating more and more memory over time), can't be cleared, and can't be saved right now either. I've only tested it with Lizzie, but the change shouldn't care about GTP vs. the analysis engine; it should work with both as long as you're repeating searches.

The minimum number of visits to cache defaults to 100 if not changed. Users with powerful GPUs who are running longer searches might benefit from increasing this number to ensure that the evals that do get cached are of better quality, although I'm not entirely sure.
hope366  2024/10/10(Thu) 11:30  No.1612
Re: Experimental KataGo engine with a new feature
So it was announced in pull request #992.
To put it simply, does this mean that the analysis becomes more detailed and (possibly) faster than before?
Thank you, hope366.
Eba  2024/10/10(Thu) 11:46  No.1613
Re: Experimental KataGo engine with a new feature
I have been using this all along, but it seems my understanding was wrong.
I had thought that if you choose "Enabled" in the option under Settings > Default Settings > Lizzie cache, the analysis done up to that point would be kept, the analysis of other moves would be discarded, and the engine would read ahead from there.
When I clicked the "?" and read the explanation, it turned out to be a setting for whether or not to keep the analysis of past moves.
I see; I feel I now understand why the visit count restarts from 0 when I play along the recommended moves.

If my understanding above is wrong, please point it out.
初心者  2024/10/12(Sat) 11:27  No.1614

edited by Eba