How to save sorted spike time stamps in spikeinterface #3599
The times of the sorted spikes are stored in the sorting object. You can get them as numpy arrays with sorting_spycir2.get_all_spike_trains() |
Thanks very much, it worked. Sorry to bother you, but could you kindly help me with how to save this as a text or Excel sheet? I am new to Python. |
Does it have to be a text or CSV file? You can save the sorting object, and then you'll always have the spike times. Are you planning to switch to a different programming environment? |
Hi Zach
Can you please tell me how to save it as text or CSV, as well as how to save it as a sorting object?
*Are you planning to switch to a different programming environment?*
No, I want to send it to a collaborator who can read only text or Excel files.
Thanks very much
|
To save it as a sorting object, all you have to do is type sorting.save(xx), where xx specifies the format you want and the location to save it to. For the Excel or text file, how do you need the data organized? The easiest would honestly be a couple of columns with the unit label and the spike time; you could then index into the Excel sheet to get the spike train for each neuron, though that requires some programming experience. The other way would be to save each neuron as a separate column/row of an Excel/txt file, but that would be super messy and not as storage friendly. What is the experience level of your collaborators? Or how do they want to interact with the data? |
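As a concrete sketch of the two-column organization Zach describes (one column for the unit label, one for the spike time, then filter by label to recover each neuron's spike train). The values here are invented; real data would come from the sorting object:

```python
import pandas as pd

# one row per spike: unit label + spike time (values invented)
df = pd.DataFrame({
    "unit_id": [0, 1, 0, 1],
    "spike_time_s": [0.012, 0.034, 0.056, 0.090],
})

# the spike train of one neuron = all rows carrying its label
unit0_train = df.loc[df["unit_id"] == 0, "spike_time_s"].tolist()
print(unit0_train)  # [0.012, 0.056]
```

In Excel the same filtering can be done with an AutoFilter on the unit_id column, so a non-programmer can still pull out one neuron's spike train.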
Dear Zach
Thanks very much. I will give the details here.
I ran SpykingCircus2. First, I want to determine whether the algorithm has detected the spikes in the data. For this purpose, as shown below, my collaborator has marked a spike in the raw data. I now want to give him the spikes detected for channel 4 by the algorithm, along with their spike timings, so that he can check against the channel 4 data whether the algorithm detects all the spikes he is looking for. Can you help me save the results in an Excel sheet so that he can open it and see these details? I am new to Python, and my collaborator is comfortable with Excel only.
[image: image.png]
|
@venkatbits thanks for that info. It actually changes a lot.
At a fundamental level, a sorting contains two vectors of information: the spike times (i.e. when the spikes occurred) and the spike labels (we call them unit ids in spikeinterface, but some people prefer cluster ids or neuron ids). The sorting object also holds other information, such as segment info (which doesn't matter for single-segment data) and some metadata. What we don't explicitly have is "channel 4". This is because one unit/neuron can generate spikes on multiple channels, so if you just count spikes on channel 4 and on channel 3, the counts will likely include some of the same neurons and you would be double counting spikes (unless your channels are extremely isolated).
Our strategy is instead to generate unit locations, using one of three computational strategies during postprocessing, to give you the location of each unit. That isn't spikes per channel; rather, the unit location is one more piece of information that goes along with each unit. (I can share docs on how to use our analyzer if you are interested.)
In your case, if you care about spikes per channel, we really need to ask why. Are your channels completely isolated, such that each channel can't "see" what is on other channels? In that case you don't even necessarily need to spike sort; you could just threshold the data yourself (or use our peak detection tools). Or, rather than thinking only about spikes on channel 4, we could just give you all the spike times, and your collaborator can compare them with his spike times and see which ones match. So you are trying to manually validate SC2?
Sorry we have to ask so many questions, but how we organize the data for your collaborator really depends on what you want to do with it. |
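A toy illustration of the two-parallel-vector layout described above. The field names follow SpikeInterface's spike vector convention, but the values and unit ids here are invented:

```python
import numpy as np

# two parallel vectors: when each spike occurred, and which unit fired it
dtype = [("sample_index", "int64"), ("unit_index", "int64")]
spike_vector = np.array([(100, 0), (250, 1), (400, 0)], dtype=dtype)

# hypothetical unit ids; each unit_index points into this array
unit_ids = np.array(["unit_a", "unit_b"])

# recover the label of every spike by fancy indexing
labels = unit_ids[spike_vector["unit_index"]]
print(labels.tolist())  # ['unit_a', 'unit_b', 'unit_a']
```

Note that nothing here says which channel a spike came from: a channel is a property derived later (e.g. from unit locations), not part of the core representation.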
Dear Zach
Many thanks for your detailed email.
Actually, this is what I want: *"Or rather than just think about spikes along channel four we could just give you all the spike times and then your collaborator can look at his spike times and see which ones match? So you are trying to manually validate SC2?"*
Out of 16 channels, the recording was done on channel 4 only, so I want to do spike detection on channel 4 only and verify it manually. However, the spike sorting algorithms in spikeinterface needed probe locations, and many of the algorithms failed to detect spikes when I considered only one channel, i.e. channel 4. Could you suggest an appropriate approach (with the relevant steps, if you can guide me, as I am new to this research area) to detect spikes in channel 4 only? After detection, I would like to give the spike timings of all the spikes detected in channel 4 to my collaborator for manual verification.
Many thanks for your time and help
Venkat
|
Yeah of course. @samuelgarcia or @alejoe91, will detect_peaks work on one channel only? Or @yger, do you have any opinions on whether SC2 will actually work with monotrode data?
In this case, though, your current sorting object should all be on channel 4, so if you want to save whatever you do have, you could take the spike_vector with:
import numpy as np
import pandas as pd
# this collects the information as two long parallel vectors
spike_vector = sorting.to_spike_vector()
spike_times = spike_vector["sample_index"]
spike_indices = spike_vector["unit_index"]
# this makes sure the unit ids are consistent between the sorting object and the spreadsheet
spike_unit_ids = np.array([sorting.unit_ids[unit] for unit in spike_indices])
# dataframes are similar to spreadsheets
df = pd.DataFrame({"Spike Times": spike_times, "Unit Id": spike_unit_ids})
# this will create a csv, but you will need to fill in the file name and location you want;
# see the pandas documentation for the arguments you need to add
df.to_csv() |
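For anyone who wants to try the snippet above end-to-end without a sorting object, here is a self-contained variant with made-up arrays standing in for sorting.to_spike_vector(); the output path is a temporary file for illustration:

```python
import os
import tempfile

import numpy as np
import pandas as pd

# invented stand-ins for the spike_vector fields
spike_times = np.array([100, 250, 400, 910])   # sample indices
spike_indices = np.array([0, 1, 0, 1])         # index into unit_ids
unit_ids = np.array([12, 37])                  # hypothetical unit ids

spike_unit_ids = unit_ids[spike_indices]

df = pd.DataFrame({"Spike Times": spike_times, "Unit Id": spike_unit_ids})

# write a CSV that Excel can open directly
path = os.path.join(tempfile.mkdtemp(), "spikes.csv")
df.to_csv(path, index=False)

print(pd.read_csv(path)["Unit Id"].tolist())  # [12, 37, 12, 37]
```

To divide the sample indices into seconds, divide by the recording's sampling frequency before building the DataFrame.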
Dear Zach
Many thanks again.
Can you also suggest which of the sorting algorithms is suitable for single-channel data? As you have rightly pointed out, SC2 failed to detect any spikes when I used only channel 4.
Thanks for the script; I will check it tomorrow morning when I go to work and let you know.
Thanks very much again
Venkat
|
I have used the detect_peaks() function with the 'by_channel' method to look at all detected peaks channel-by-channel in the past, to get a rough sense of how much a spike sorter might be missing. I then created a sorting object from this and exported it to phy to do some very manual cluster cutting, where essentially each channel's detected peaks were initially treated as a 'unit'. This seemed to work well for that particular purpose, and I can share some code if that would be useful. |
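detect_peaks itself lives in spikeinterface, but the per-channel thresholding idea it builds on can be sketched with plain NumPy. This is a naive stand-in, not the library function; the trace and threshold are invented:

```python
import numpy as np

def detect_threshold_crossings(trace, thresh):
    """Indices where the signal first drops below a negative threshold
    (extracellular spikes are usually negative deflections)."""
    below = trace < thresh
    # an onset is a sample below threshold whose previous sample was not
    return np.flatnonzero(below[1:] & ~below[:-1]) + 1

# toy single-channel trace with two clear negative deflections
trace = np.zeros(100)
trace[20] = -8.0
trace[70] = -9.0
print(detect_threshold_crossings(trace, thresh=-5.0).tolist())  # [20, 70]
```

In practice the threshold is usually set relative to the noise level of each channel (e.g. a few times the estimated noise MAD) rather than as a fixed value.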
Dear Jake
Thanks so much. If you can share the code, that would be great.
Thanks a lot for your help
Venkat
|
Yes, detect_peaks works with one unique channel. And this should be faster:
unit_indices = spike_vector["unit_index"]
spike_unit_ids = sorting.unit_ids[unit_indices] |
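The speed-up comes from replacing a Python-level list comprehension with NumPy fancy indexing, which performs all lookups in one vectorized call. A small sketch with invented ids:

```python
import numpy as np

unit_ids = np.array([5, 9, 42])            # invented unit ids
unit_indices = np.array([0, 2, 2, 1, 0])   # one entry per spike

# list-comprehension version: one Python-level lookup per spike
slow = np.array([unit_ids[i] for i in unit_indices])

# vectorized fancy indexing: a single call, executed in C
fast = unit_ids[unit_indices]

assert np.array_equal(slow, fast)
print(fast.tolist())  # [5, 42, 42, 9, 5]
```

For recordings with millions of spikes, the difference between the two versions becomes very noticeable.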
Dear Garcia
Thanks a lot
Venkat
|
Hi All
I have searched extensively in the spikeinterface documentation and on the web, but I was unable to find information on how to save all the sorted spike times, hence I am raising this issue.
I am running the following commands on my data:
import spikeinterface.full as si
import spikeinterface.sorters as ss
from spikeinterface.exporters import export_report
sorting_spycir2 = ss.run_sorter(sorter_name="spykingcircus2", recording=recording_seg, output_folder="C:/Users//Desktop/spike/folder_spykingcircus2_all_chanels")
folder = 'C:/Users//Desktop/spike/waveforms_spycir2_all'
we_spycir2_all = si.extract_waveforms(recording_seg, sorting_spycir2, folder, load_if_exists=None, ms_before=1, ms_after=2., max_spikes_per_unit=500, n_jobs=1, chunk_size=30000)
print(we_spycir2_all)
sorting_analyzer = si.create_sorting_analyzer(sorting=sorting_spycir2, recording=recording_seg)
sorting_analyzer.compute(['random_spikes', 'waveforms', 'templates', 'noise_levels'])
sorting_analyzer.compute(['spike_locations'])
export_report(sorting_analyzer=sorting_analyzer, output_folder='C:/Users/Desktop/spike/folder_spykingcircus_allchanels_report')
In the report generated above I could not find the time stamps of each detected spike.
I want to save the time stamps of all the detected/sorted spikes. Can you kindly help with this?
Thanks
Venkat