Metric Expansion for SpikeForest

Just curious, what is the plan to expand the metrics on the website? Will you start to add metrics that quantify what is going on with unmatched units? Do you have an idea of which metrics you might add and a rough timeline for that process?

Also, when will the website be open to the general public (not on the beta website)?

Love the website and am excited to see it grow!

The plan is for the website to go to version 1.0 at spikeforest.flatironinstitute.org some time within the next couple of weeks.

Regarding metrics… people have suggested a number of things, and I think it will be important to add them. Some measure of false positive units… and a best-case-greedy-merging metric to mimic the best-case manual merging strategy.

The best way to add those is to first make an example notebook, or Python functions, that compute them efficiently, and then I can put them into the pipeline. It wouldn’t take more than a couple of devoted days to do that, but I’m not sure where it falls on the priority list at this point.
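For concreteness, here is a rough sketch of how the two proposed metrics could be computed from the ground-truth matching results. The function names, the inputs (an agreement matrix, per-pair matched-spike counts, and unit sizes), and the 0.5 threshold are illustrative assumptions, not the actual pipeline code:

```python
import numpy as np

def count_false_positive_units(agreement, match_threshold=0.5):
    """Count sorted units that do not correspond to any ground-truth unit.

    agreement : (n_gt_units, n_sorted_units) array of agreement scores in [0, 1].
    A sorted unit whose best agreement with every ground-truth unit stays below
    match_threshold is counted as an extra (false-positive) unit.
    """
    agreement = np.asarray(agreement)
    best_per_sorted = agreement.max(axis=0)
    return int(np.sum(best_per_sorted < match_threshold))

def greedy_merge_accuracy(n_gt_spikes, matched_counts, sorted_unit_sizes):
    """Best-case accuracy for one ground-truth unit under greedy merging.

    matched_counts[k]    : number of GT spikes matched by sorted unit k
    sorted_unit_sizes[k] : total number of spikes in sorted unit k

    Sorted units are merged one at a time (largest match first) as long as the
    merged accuracy tp / (tp + fn + fp) keeps improving, mimicking an ideal
    manual merging pass. Assumes each GT spike is matched by at most one sorted
    unit, so matched counts simply add up across a merge.
    """
    matched_counts = np.asarray(matched_counts)
    sorted_unit_sizes = np.asarray(sorted_unit_sizes)
    order = np.argsort(matched_counts)[::-1]
    best_acc, tp, total = 0.0, 0, 0
    for k in order:
        tp_new = tp + matched_counts[k]
        total_new = total + sorted_unit_sizes[k]
        fp = total_new - tp_new
        fn = n_gt_spikes - tp_new
        acc = tp_new / (tp_new + fp + fn)
        if acc <= best_acc:
            break
        best_acc, tp, total = acc, tp_new, total_new
    return best_acc
```

The greedy loop stops at the first merge that no longer improves accuracy, which is what an ideal manual merging pass would do; the real metric should probably recompute the match from the merged spike trains rather than summing counts.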


Thanks @mhhennig, @colehurwitz, @alejoe91 for today’s discussion. It’s clear that we need to capture false positive units (either redundant or extra units) in the spikeforest output – perhaps even in the default view.

For example, kilosort2 has very high accuracy in many cases, but a more careful analysis reveals that there are many extra and redundant units found – so it’s important to display that in a clear way somehow.

@samuelgarcia, I may need your help extracting this info from the comparison with ground truth, and some ideas on how best to display it on the website.
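(For anyone following along, a minimal sketch of what that extraction could look like, assuming the spikeinterface ground-truth comparison API; the method names may differ between versions, and the helper name here is just for illustration.)

```python
import spikeinterface.comparison as scmp

def summarize_extra_units(gt_sorting, tested_sorting):
    # exhaustive_gt=True tells the comparison that the ground truth contains
    # every real unit, so leftover sorted units can be classified as
    # false-positive / redundant / overmerged.
    cmp = scmp.compare_sorter_to_ground_truth(gt_sorting, tested_sorting,
                                              exhaustive_gt=True)
    return {
        "false_positive": cmp.get_false_positive_units(),  # match no GT unit
        "redundant": cmp.get_redundant_units(),      # duplicate an already-matched GT unit
        "overmerged": cmp.get_overmerged_units(),    # mix several GT units together
    }
```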


@pierre, the above may be of interest to you (our plans to report false positive and redundant units). Not sure if SC reports a lot of extra false units, or if there is an auto-exclusion phase that a user could employ.

@magland: I can make a NB to show how to exploit these WIP metrics.

@samuelgarcia, that would be great.

@magland Yes, I made an option in SC such that the software, during its merging step, can also remove obvious noise templates and redundant units. By default I don’t activate it, because with real data you usually don’t know what the noise is, but we could activate it in the next release of SpikeForest for the synthetic datasets. Actually, your benchmarks are helping us make this meta-merging step more general and robust, as this was the final block we never dared to automate!

Great!

Yeah, once we have the new metric/eval in place for penalizing extra false positive units, we can enable that feature for the SF run.