url
stringlengths
58
61
repository_url
stringclasses
1 value
labels_url
stringlengths
72
75
comments_url
stringlengths
67
70
events_url
stringlengths
65
68
html_url
stringlengths
48
51
id
int64
600M
2.19B
node_id
stringlengths
18
24
number
int64
2
6.73k
title
stringlengths
1
290
user
dict
labels
listlengths
0
4
state
stringclasses
2 values
locked
bool
1 class
assignee
dict
assignees
listlengths
0
4
milestone
dict
comments
listlengths
0
30
created_at
timestamp[s]
updated_at
timestamp[s]
closed_at
timestamp[s]
author_association
stringclasses
3 values
active_lock_reason
null
draft
null
pull_request
null
body
stringlengths
0
228k
reactions
dict
timeline_url
stringlengths
67
70
performed_via_github_app
null
state_reason
stringclasses
3 values
https://api.github.com/repos/huggingface/datasets/issues/4118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4118/comments
https://api.github.com/repos/huggingface/datasets/issues/4118/events
https://github.com/huggingface/datasets/issues/4118
1,195,638,944
I_kwDODunzps5HRACg
4,118
Failing CI tests on Windows
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[]
2022-04-07T07:36:25
2022-04-07T07:57:13
2022-04-07T07:57:13
MEMBER
null
null
null
## Describe the bug Our CI Windows tests are failing from yesterday: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4118/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4117/comments
https://api.github.com/repos/huggingface/datasets/issues/4117/events
https://github.com/huggingface/datasets/issues/4117
1,195,552,406
I_kwDODunzps5HQq6W
4,117
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
{ "login": "arymbe", "id": 4567991, "node_id": "MDQ6VXNlcjQ1Njc5OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/4567991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arymbe", "html_url": "https://github.com/arymbe", "followers_url": "https://api.github.com/users/arymbe/followers", "following_url": "https://api.github.com/users/arymbe/following{/other_user}", "gists_url": "https://api.github.com/users/arymbe/gists{/gist_id}", "starred_url": "https://api.github.com/users/arymbe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arymbe/subscriptions", "organizations_url": "https://api.github.com/users/arymbe/orgs", "repos_url": "https://api.github.com/users/arymbe/repos", "events_url": "https://api.github.com/users/arymbe/events{/privacy}", "received_events_url": "https://api.github.com/users/arymbe/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @arymbe, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your problem.\r\n\r\nCould you please write the complete stack trace? That way we will be able to see which package originates the exception.", "Hello, thank you for your fast replied. this is the complete error that I got\r\n\r\n-...
2022-04-07T05:52:36
2024-02-15T14:11:35
2022-04-19T15:36:35
NONE
null
null
null
## Describe the bug Could you help me please. I got this following error. AttributeError: module 'huggingface_hub' has no attribute 'hf_api' ## Steps to reproduce the bug when I imported the datasets # Sample code to reproduce the bug from datasets import list_datasets, load_dataset, list_metrics, load_metric ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-12.3-x86_64-i386-64bit - Python version: 3.8.9 - PyArrow version: 7.0.0 - Pandas version: 1.3.5 - Huggingface-hub: 0.5.0 - Transformers: 4.18.0 Thank you in advance.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4117/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4117/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4115/comments
https://api.github.com/repos/huggingface/datasets/issues/4115/events
https://github.com/huggingface/datasets/issues/4115
1,194,907,555
I_kwDODunzps5HONej
4,115
ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default. \r\n\r\nCC @mariosasko ", "Maybe we can add a `ignore_hidden_files` flag to the builder configs of our packaged loaders (to be consistent across all of them), wdyt @lhoestq @albertvillanova? ", "I think they should...
2022-04-06T17:29:43
2022-06-01T13:04:16
2022-06-01T13:04:16
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss especially if the dataset is very large. **Describe the solution you'd like** maybe have an option `ignore` or something .gitignore style `dataset = load_dataset("imagefolder", data_dir="./data/original", ignore="regex?")` **Describe alternatives you've considered** Could filter out manually
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4115/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4114/comments
https://api.github.com/repos/huggingface/datasets/issues/4114/events
https://github.com/huggingface/datasets/issues/4114
1,194,855,345
I_kwDODunzps5HOAux
4,114
Allow downloading just some columns of a dataset
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "In the general case you can’t always reduce the quantity of data to download, since you can’t parse CSV or JSON data without downloading the whole files right ? ^^ However we could explore this case-by-case I guess", "Actually for csv pandas has `usecols` which allows loading a subset of columns in a more effici...
2022-04-06T16:38:46
2024-02-21T11:29:35
null
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always makes sense for this kind of use case **Describe the solution you'd like** Be able to just download some columns of a dataset, such as doing ```python load_dataset("huggan/wikiart",columns=["artist", "genre"]) ``` Although this might make things a bit complicated in terms of local caching of datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4114/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4114/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4113/comments
https://api.github.com/repos/huggingface/datasets/issues/4113/events
https://github.com/huggingface/datasets/issues/4113
1,194,843,532
I_kwDODunzps5HN92M
4,113
Multiprocessing with FileLock fails in python 3.9
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Closing this one because it must be used this way actually:\r\n```python\r\ndef main():\r\n with FileLock(\"tmp.lock\"):\r\n with Pool(2) as pool:\r\n pool.map(run, range(2))\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```" ]
2022-04-06T16:27:09
2022-11-28T11:49:14
2022-11-28T11:49:14
MEMBER
null
null
null
On python 3.9, this code hangs: ```python from multiprocessing import Pool from filelock import FileLock def run(i): print(f"got the lock in multi process [{i}]") with FileLock("tmp.lock"): with Pool(2) as pool: pool.map(run, range(2)) ``` This is because the subprocesses try to acquire the lock from the main process for some reason. This is not the case in older versions of python. This can cause many issues in python 3.9. In particular, we use multiprocessing to fetch data files when you load a dataset (as long as there are >16 data files). Therefore `imagefolder` hangs, and I expect any dataset that needs to download >16 files to hang as well. Let's see if we can fix this and have a CI that runs on 3.9. cc @mariosasko @julien-c
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4113/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4112/comments
https://api.github.com/repos/huggingface/datasets/issues/4112/events
https://github.com/huggingface/datasets/issues/4112
1,194,752,765
I_kwDODunzps5HNnr9
4,112
ImageFolder with Grayscale images dataset
{ "login": "chainyo", "id": 50595514, "node_id": "MDQ6VXNlcjUwNTk1NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chainyo", "html_url": "https://github.com/chainyo", "followers_url": "https://api.github.com/users/chainyo/followers", "following_url": "https://api.github.com/users/chainyo/following{/other_user}", "gists_url": "https://api.github.com/users/chainyo/gists{/gist_id}", "starred_url": "https://api.github.com/users/chainyo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chainyo/subscriptions", "organizations_url": "https://api.github.com/users/chainyo/orgs", "repos_url": "https://api.github.com/users/chainyo/repos", "events_url": "https://api.github.com/users/chainyo/events{/privacy}", "received_events_url": "https://api.github.com/users/chainyo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Replacing:\r\n```python\r\ntransformed_dataset = dataset.with_transform(transforms)\r\ntransformed_dataset.set_format(type=\"torch\", device=\"cuda\")\r\n```\r\n\r\nwith:\r\n```python\r\ndef transform_func(examples):\r\n examples[\"image\"] = [transforms(img).to(\"cuda\") for img in examples[\"image\"]]\r\n...
2022-04-06T15:10:00
2022-04-22T10:21:53
2022-04-22T10:21:52
NONE
null
null
null
Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP) I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback: ```bash AttributeError: Caught AttributeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1765, in __getitem__ return self._getitem( File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1750, in _getitem formatted_output = format_table( File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 532, in format_table return formatter(pa_table, query_type=query_type) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 281, in __call__ return self.format_row(pa_table) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 58, in format_row return self.recursive_tensorize(row) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 54, in recursive_tensorize return map_nested(self._recursive_tensorize, data_struct, map_list=False) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 314, in map_nested mapped = [ File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 315, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in _single_map_nested return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in <dictcomp> return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 251, in _single_map_nested return function(data_struct) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 51, in _recursive_tensorize return self._tensorize(data_struct) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 38, in _tensorize if np.issubdtype(value.dtype, np.integer): AttributeError: 'bytes' object has no attribute 'dtype' ``` I don't really understand why the image is still a bytes object while I used transformations on it. Here the code I used to upload the dataset (and it worked well): ```python train_dataset = load_dataset("imagefolder", data_dir="data/train") train_dataset = train_dataset["train"] test_dataset = load_dataset("imagefolder", data_dir="data/test") test_dataset = test_dataset["train"] val_dataset = load_dataset("imagefolder", data_dir="data/val") val_dataset = val_dataset["train"] dataset = DatasetDict({ "train": train_dataset, "val": val_dataset, "test": test_dataset }) dataset.push_to_hub("ChainYo/rvl-cdip") ``` Now here is the code I am using to get the dataset and prepare it for training: ```python img_size = 512 batch_size = 128 normalize = [(0.5), (0.5)] data_dir = "ChainYo/rvl-cdip" dataset = load_dataset(data_dir, split="train") transforms = transforms.Compose([ transforms.Resize(img_size), transforms.CenterCrop(img_size), transforms.ToTensor(), transforms.Normalize(*normalize) ]) transformed_dataset = dataset.with_transform(transforms) transformed_dataset.set_format(type="torch", device="cuda") train_dataloader = torch.utils.data.DataLoader( transformed_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True ) ``` But this get me the error above. I don't understand why it's doing this kind of weird thing? Do I need to map something on the dataset? Something like this: ```python labels = dataset.features["label"].names num_labels = dataset.features["label"].num_classes def preprocess_data(examples): images = [ex.convert("RGB") for ex in examples["image"]] labels = [ex for ex in examples["label"]] return {"images": images, "labels": labels} features = Features({ "images": Image(decode=True, id=None), "labels": ClassLabel(num_classes=num_labels, names=labels) }) decoded_dataset = dataset.map(preprocess_data, remove_columns=dataset.column_names, features=features, batched=True, batch_size=100) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4112/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4107/comments
https://api.github.com/repos/huggingface/datasets/issues/4107/events
https://github.com/huggingface/datasets/issues/4107
1,194,484,885
I_kwDODunzps5HMmSV
4,107
Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows
{ "login": "Pavithree", "id": 23344465, "node_id": "MDQ6VXNlcjIzMzQ0NDY1", "avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Pavithree", "html_url": "https://github.com/Pavithree", "followers_url": "https://api.github.com/users/Pavithree/followers", "following_url": "https://api.github.com/users/Pavithree/following{/other_user}", "gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}", "starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions", "organizations_url": "https://api.github.com/users/Pavithree/orgs", "repos_url": "https://api.github.com/users/Pavithree/repos", "events_url": "https://api.github.com/users/Pavithree/events{/privacy}", "received_events_url": "https://api.github.com/users/Pavithree/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting. I'm looking at it", " It's not related to the dataset viewer in itself. I can replicate the error with:\r\n\r\n```\r\n>>> import datasets as ds\r\n>>> d = ds.load_dataset('Pavithree/explainLikeImFive')\r\nUsing custom data configuration Pavithree--explainLikeImFive-b68b6d8112cd8a51\r\nDown...
2022-04-06T11:37:15
2022-04-08T07:13:07
2022-04-06T14:39:55
NONE
null
null
null
## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows **Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive* *This is the subset of original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belongs to one particular subreddit thread. However, the dataset preview for train split returns the below mentioned error: Status code: 400 Exception: ArrowInvalid Message: Exceeded maximum rows When I try to load the same dataset it returns ArrowInvalid: Exceeded maximum rows error* Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4107/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4105/comments
https://api.github.com/repos/huggingface/datasets/issues/4105/events
https://github.com/huggingface/datasets/issues/4105
1,194,297,119
I_kwDODunzps5HL4cf
4,105
push to hub fails with huggingface-hub 0.5.0
{ "login": "frascuchon", "id": 2518789, "node_id": "MDQ6VXNlcjI1MTg3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frascuchon", "html_url": "https://github.com/frascuchon", "followers_url": "https://api.github.com/users/frascuchon/followers", "following_url": "https://api.github.com/users/frascuchon/following{/other_user}", "gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}", "starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions", "organizations_url": "https://api.github.com/users/frascuchon/orgs", "repos_url": "https://api.github.com/users/frascuchon/repos", "events_url": "https://api.github.com/users/frascuchon/events{/privacy}", "received_events_url": "https://api.github.com/users/frascuchon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Indeed there was a breaking change in `huggingface_hub` 0.5.0 in `HfApi.create_repo`, which is called here in `datasets` by passing the org name in both the `repo_id` and the `organization` arguments:\r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arr...
2022-04-06T08:59:57
2022-04-13T14:30:47
2022-04-13T14:30:47
NONE
null
null
null
## Describe the bug `ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id" ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("rubrix/news_test") ds.push_to_hub("<your-user>/news_test", token="<your-token>") ``` ## Expected results The dataset is successfully uploaded ## Actual results An error validation is raised: ```bash if repo_id and (name or organization): > raise ValueError( "Only pass `repo_id` and leave deprecated `name` and " "`organization` to be None." E ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.1 - `huggingface-hub`: 0.5 - Platform: macOS - Python version: 3.8.12 - PyArrow version: 6.0.0 cc @adrinjalali
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4105/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4104/comments
https://api.github.com/repos/huggingface/datasets/issues/4104/events
https://github.com/huggingface/datasets/issues/4104
1,194,072,966
I_kwDODunzps5HLBuG
4,104
Add time series data - stock market
{ "login": "INF800", "id": 45640029, "node_id": "MDQ6VXNlcjQ1NjQwMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/INF800", "html_url": "https://github.com/INF800", "followers_url": "https://api.github.com/users/INF800/followers", "following_url": "https://api.github.com/users/INF800/following{/other_user}", "gists_url": "https://api.github.com/users/INF800/gists{/gist_id}", "starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/INF800/subscriptions", "organizations_url": "https://api.github.com/users/INF800/orgs", "repos_url": "https://api.github.com/users/INF800/repos", "events_url": "https://api.github.com/users/INF800/events{/privacy}", "received_events_url": "https://api.github.com/users/INF800/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Can I use instructions present in below link for time series dataset as well? \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md ", "cc'ing @kashif and @NielsRogge for visibility!", "@INF800 happy to add this dataset! I will try to set a PR by the end of the day... if you can kindly poi...
2022-04-06T05:46:58
2022-04-11T09:07:10
null
NONE
null
null
null
## Adding a Time Series Dataset - **Name:** 2min ticker data for stock market - **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image - **Data:** Collected by myself from investing.com - **Motivation:** Test applicability of transformer based model on stock market / time series problem ![image](https://user-images.githubusercontent.com/45640029/161904077-52fe97cb-3720-4e3f-98ee-7f6720a056e2.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4104/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4104/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4101/comments
https://api.github.com/repos/huggingface/datasets/issues/4101/events
https://github.com/huggingface/datasets/issues/4101
1,193,399,204
I_kwDODunzps5HIdOk
4,101
How can I download only the train and test split for full numbers using load_dataset()?
{ "login": "Nakkhatra", "id": 64383902, "node_id": "MDQ6VXNlcjY0MzgzOTAy", "avatar_url": "https://avatars.githubusercontent.com/u/64383902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nakkhatra", "html_url": "https://github.com/Nakkhatra", "followers_url": "https://api.github.com/users/Nakkhatra/followers", "following_url": "https://api.github.com/users/Nakkhatra/following{/other_user}", "gists_url": "https://api.github.com/users/Nakkhatra/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nakkhatra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nakkhatra/subscriptions", "organizations_url": "https://api.github.com/users/Nakkhatra/orgs", "repos_url": "https://api.github.com/users/Nakkhatra/repos", "events_url": "https://api.github.com/users/Nakkhatra/events{/privacy}", "received_events_url": "https://api.github.com/users/Nakkhatra/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! Can you please specify the full name of the dataset? IIRC `full_numbers` is one of the configs of the `svhn` dataset, and its generation is slow due to data being stored in binary Matlab files. Even if you specify a specific split, `datasets` downloads all of them, but we plan to fix that soon and only downloa...
2022-04-05T16:00:15
2022-04-06T13:09:01
null
NONE
null
null
null
How can I download only the train and test split for full numbers using load_dataset()? I do not need the extra split and it will take 40 mins just to download in Colab. I have very short time in hand. Please help.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4101/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4099/comments
https://api.github.com/repos/huggingface/datasets/issues/4099/events
https://github.com/huggingface/datasets/issues/4099
1,193,253,768
I_kwDODunzps5HH5uI
4,099
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
{ "login": "andreybond", "id": 20210017, "node_id": "MDQ6VXNlcjIwMjEwMDE3", "avatar_url": "https://avatars.githubusercontent.com/u/20210017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andreybond", "html_url": "https://github.com/andreybond", "followers_url": "https://api.github.com/users/andreybond/followers", "following_url": "https://api.github.com/users/andreybond/following{/other_user}", "gists_url": "https://api.github.com/users/andreybond/gists{/gist_id}", "starred_url": "https://api.github.com/users/andreybond/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreybond/subscriptions", "organizations_url": "https://api.github.com/users/andreybond/orgs", "repos_url": "https://api.github.com/users/andreybond/repos", "events_url": "https://api.github.com/users/andreybond/events{/privacy}", "received_events_url": "https://api.github.com/users/andreybond/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n ...
2022-04-05T14:42:38
2022-04-06T06:37:44
2022-04-06T06:35:54
NONE
null
null
null
## Describe the bug Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset("nielsr/XFUN", "xfun.ja") ``` ## Expected results Dataset should be downloaded without exceptions ## Actual results Stack trace (for the second-time execution): Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477... Downloading data files: 100% 2/2 [00:00<00:00, 88.48it/s] Extracting data files: 100% 2/2 [00:00<00:00, 79.60it/s] UnicodeDecodeErrorTraceback (most recent call last) <ipython-input-31-79c26bd1109c> in <module> 1 from datasets import load_dataset 2 ----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja") /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 604 ) 605 --> 606 # By default, return all splits 607 if split is None: 608 split = {s: s for s in self.info.splits} /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 692 Args: 693 split: `datasets.Split` which subset of the data to read. --> 694 695 Returns: 696 `Dataset` /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys) /usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self) 252 if not self.disable: 253 self.display(check_delay=False) --> 254 255 def __iter__(self): 256 try: /usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self) 1183 for obj in iterable: 1184 yield obj -> 1185 return 1186 1187 mininterval = self.mininterval ~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths) 140 logger.info("Generating examples from = %s", filepath) 141 with open(filepath[0], "r") as f: --> 142 data = json.load(f) 143 144 for doc in data["documents"]: /usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 294 295 """ --> 296 return loads(fp.read(), 297 cls=cls, object_hook=object_hook, 298 parse_float=parse_float, parse_int=parse_int, /usr/lib/python3.6/encodings/ascii.py in decode(self, input, final) 24 class IncrementalDecoder(codecs.IncrementalDecoder): 25 def decode(self, input, final=False): ---> 26 return codecs.ascii_decode(input, self.errors)[0] 27 28 class StreamWriter(Codec,codecs.StreamWriter): UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 (but reproduced with many previous versions) - Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ; Base docker image is : huggingface/transformers-pytorch-cpu - Python version: 3.6.9 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4099/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4096/comments
https://api.github.com/repos/huggingface/datasets/issues/4096/events
https://github.com/huggingface/datasets/issues/4096
1,193,165,229
I_kwDODunzps5HHkGt
4,096
Add support for streaming Zarr stores for hosted datasets
{ "login": "jacobbieker", "id": 7170359, "node_id": "MDQ6VXNlcjcxNzAzNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jacobbieker", "html_url": "https://github.com/jacobbieker", "followers_url": "https://api.github.com/users/jacobbieker/followers", "following_url": "https://api.github.com/users/jacobbieker/following{/other_user}", "gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}", "starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions", "organizations_url": "https://api.github.com/users/jacobbieker/orgs", "repos_url": "https://api.github.com/users/jacobbieker/repos", "events_url": "https://api.github.com/users/jacobbieker/events{/privacy}", "received_events_url": "https://api.github.com/users/jacobbieker/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @jacobbieker, thanks for your request and study of possible alternatives.\r\n\r\nWe are very interested in finding a way to make `datasets` useful to you.\r\n\r\nLooking at the Zarr docs, I saw that among its storage alternatives, there is the ZIP file format: https://zarr.readthedocs.io/en/stable/api/storage.h...
2022-04-05T13:38:32
2023-12-07T09:01:49
2022-04-21T08:12:58
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr stores are designed to be easily streamed in from cloud storage, especially with xarray and fsspec. Since geospatial data tends to be very large, and on the order of TBs of data or 10's of TBs of data for a single dataset, it can be difficult to store the dataset locally for users. Just adding Zarr stores with HF git doesn't work well (see https://github.com/huggingface/datasets/issues/3823) as Zarr splits the data into lots of small chunks for fast loading, and that doesn't work well with git. I've somewhat gotten around that issue by tarring each Zarr store and uploading them as a single file, which seems to be working (see https://huggingface.co/datasets/openclimatefix/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work though. On the other hand, in https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv we stream in a Zarr store from a public GCP bucket quite easily. **Describe the solution you'd like** A way to upload Zarr stores for hosted datasets so that we can stream it with xarray and fsspec. **Describe alternatives you've considered** Tarring each Zarr store individually and just extracting them in the dataset script -> Downside this is a lot of data that probably doesn't fit locally for a lot of potential users. Pre-prepare examples in a format like Parquet -> Would use a lot more storage, and a lot less flexibility, in the eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4096/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4096/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4094/comments
https://api.github.com/repos/huggingface/datasets/issues/4094/events
https://github.com/huggingface/datasets/issues/4094
1,192,534,414
I_kwDODunzps5HFKGO
4,094
Helo Mayfrends
{ "login": "Budigming", "id": 102933353, "node_id": "U_kgDOBiKjaQ", "avatar_url": "https://avatars.githubusercontent.com/u/102933353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Budigming", "html_url": "https://github.com/Budigming", "followers_url": "https://api.github.com/users/Budigming/followers", "following_url": "https://api.github.com/users/Budigming/following{/other_user}", "gists_url": "https://api.github.com/users/Budigming/gists{/gist_id}", "starred_url": "https://api.github.com/users/Budigming/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Budigming/subscriptions", "organizations_url": "https://api.github.com/users/Budigming/orgs", "repos_url": "https://api.github.com/users/Budigming/repos", "events_url": "https://api.github.com/users/Budigming/events{/privacy}", "received_events_url": "https://api.github.com/users/Budigming/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
2022-04-05T02:42:57
2022-04-05T07:16:42
2022-04-05T07:16:42
NONE
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4094/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4093/comments
https://api.github.com/repos/huggingface/datasets/issues/4093/events
https://github.com/huggingface/datasets/issues/4093
1,192,523,161
I_kwDODunzps5HFHWZ
4,093
elena-soare/crawled-ecommerce: missing dataset
{ "login": "seevaratnam", "id": 17519354, "node_id": "MDQ6VXNlcjE3NTE5MzU0", "avatar_url": "https://avatars.githubusercontent.com/u/17519354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seevaratnam", "html_url": "https://github.com/seevaratnam", "followers_url": "https://api.github.com/users/seevaratnam/followers", "following_url": "https://api.github.com/users/seevaratnam/following{/other_user}", "gists_url": "https://api.github.com/users/seevaratnam/gists{/gist_id}", "starred_url": "https://api.github.com/users/seevaratnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seevaratnam/subscriptions", "organizations_url": "https://api.github.com/users/seevaratnam/orgs", "repos_url": "https://api.github.com/users/seevaratnam/repos", "events_url": "https://api.github.com/users/seevaratnam/events{/privacy}", "received_events_url": "https://api.github.com/users/seevaratnam/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.c...
null
[ "It's a bug! Thanks for reporting, I'm looking at it.", "By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.\r\nAnyway, we're working to give a hint about this in the dataset viewer.", "Fixed. See https://huggi...
2022-04-05T02:25:19
2022-04-12T09:34:53
2022-04-12T09:34:53
NONE
null
null
null
elena-soare/crawled-ecommerce **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4093/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4091/comments
https://api.github.com/repos/huggingface/datasets/issues/4091/events
https://github.com/huggingface/datasets/issues/4091
1,192,023,855
I_kwDODunzps5HDNcv
4,091
Build a Dataset One Example at a Time Without Loading All Data Into Memory
{ "login": "aravind-tonita", "id": 99340348, "node_id": "U_kgDOBevQPA", "avatar_url": "https://avatars.githubusercontent.com/u/99340348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aravind-tonita", "html_url": "https://github.com/aravind-tonita", "followers_url": "https://api.github.com/users/aravind-tonita/followers", "following_url": "https://api.github.com/users/aravind-tonita/following{/other_user}", "gists_url": "https://api.github.com/users/aravind-tonita/gists{/gist_id}", "starred_url": "https://api.github.com/users/aravind-tonita/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aravind-tonita/subscriptions", "organizations_url": "https://api.github.com/users/aravind-tonita/orgs", "repos_url": "https://api.github.com/users/aravind-tonita/repos", "events_url": "https://api.github.com/users/aravind-tonita/events{/privacy}", "received_events_url": "https://api.github.com/users/aravind-tonita/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi! Yes, the problem with `add_item` is that it keeps examples in memory, so you are left with these options:\r\n* writing a dataset loading script in which you iterate over `custom_example_dict_streamer` and yield the examples (in `_generate examples`)\r\n* storing the data in a JSON/CSV/Parquet/TXT file and usin...
2022-04-04T16:19:24
2022-04-20T14:31:00
2022-04-20T14:31:00
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.** **Describe the solution you'd like** I would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. If it helps, I will know the schema of my dataset before hand. ``` # Initialize an empty Dataset, possibly from a known schema. dataset = Dataset() # Read in examples one by one using a custom data streamer. for example_dict in custom_example_dict_streamer("/path/to/raw/data"): # Add this example to the dict but do not store it in memory. dataset.add_item(example_dict) # Save the final dataset to disk as an Arrow-backed dataset. dataset.save_to_disk("/path/to/dataset") ... # I'd like to be able to later `load_from_disk` and use the loaded Dataset # just like any other memory-mapped pyarrow-backed HuggingFace dataset... loaded_dataset = Dataset.load_from_disk("/path/to/dataset") loaded_dataset.set_format(type="torch", columnns=["foo", "bar", "baz"]) dataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16) ... ``` **Describe alternatives you've considered** I initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping. Do you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? Thank you very much in advance!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4091/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4086/comments
https://api.github.com/repos/huggingface/datasets/issues/4086/events
https://github.com/huggingface/datasets/issues/4086
1,191,373,374
I_kwDODunzps5HAuo-
4,086
Dataset viewer issue for McGill-NLP/feedbackQA
{ "login": "cslizc", "id": 54827718, "node_id": "MDQ6VXNlcjU0ODI3NzE4", "avatar_url": "https://avatars.githubusercontent.com/u/54827718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cslizc", "html_url": "https://github.com/cslizc", "followers_url": "https://api.github.com/users/cslizc/followers", "following_url": "https://api.github.com/users/cslizc/following{/other_user}", "gists_url": "https://api.github.com/users/cslizc/gists{/gist_id}", "starred_url": "https://api.github.com/users/cslizc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cslizc/subscriptions", "organizations_url": "https://api.github.com/users/cslizc/orgs", "repos_url": "https://api.github.com/users/cslizc/repos", "events_url": "https://api.github.com/users/cslizc/events{/privacy}", "received_events_url": "https://api.github.com/users/cslizc/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @cslizc, thanks for reporting.\r\n\r\nI have just forced the refresh of the corresponding cache and the preview is working now.", "thank you so much" ]
2022-04-04T07:27:20
2022-04-04T22:29:53
2022-04-04T08:01:45
NONE
null
null
null
## Dataset viewer issue for '*McGill-NLP/feedbackQA*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)* *short description of the issue* The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message: ``` Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist. ``` Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4086/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4085/comments
https://api.github.com/repos/huggingface/datasets/issues/4085/events
https://github.com/huggingface/datasets/issues/4085
1,190,621,345
I_kwDODunzps5G93Ch
4,085
datasets.set_progress_bar_enabled(False) not working in datasets v2
{ "login": "virilo", "id": 3381112, "node_id": "MDQ6VXNlcjMzODExMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3381112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/virilo", "html_url": "https://github.com/virilo", "followers_url": "https://api.github.com/users/virilo/followers", "following_url": "https://api.github.com/users/virilo/following{/other_user}", "gists_url": "https://api.github.com/users/virilo/gists{/gist_id}", "starred_url": "https://api.github.com/users/virilo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/virilo/subscriptions", "organizations_url": "https://api.github.com/users/virilo/orgs", "repos_url": "https://api.github.com/users/virilo/repos", "events_url": "https://api.github.com/users/virilo/events{/privacy}", "received_events_url": "https://api.github.com/users/virilo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Now, I can't find any reference to set_progress_bar_enabled in the code.\r\n\r\nI think it have been deleted", "Hi @virilo,\r\n\r\nPlease note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):\r\n- #3897\r\n\r\nNow, you should update...
2022-04-02T12:40:10
2022-09-17T02:18:03
2022-04-04T06:44:34
NONE
null
null
null
## Describe the bug datasets.set_progress_bar_enabled(False) not working in datasets v2 ## Steps to reproduce the bug ```python datasets.set_progress_bar_enabled(False) ``` ## Expected results datasets not using any progress bar ## Actual results AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled ## Environment info datasets version 2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4085/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4085/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4084/comments
https://api.github.com/repos/huggingface/datasets/issues/4084/events
https://github.com/huggingface/datasets/issues/4084
1,190,060,415
I_kwDODunzps5G7uF_
4,084
Errors in `Train with Datasets` Tensorflow code section on Huggingface.co
{ "login": "blackhat-coder", "id": 57095771, "node_id": "MDQ6VXNlcjU3MDk1Nzcx", "avatar_url": "https://avatars.githubusercontent.com/u/57095771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blackhat-coder", "html_url": "https://github.com/blackhat-coder", "followers_url": "https://api.github.com/users/blackhat-coder/followers", "following_url": "https://api.github.com/users/blackhat-coder/following{/other_user}", "gists_url": "https://api.github.com/users/blackhat-coder/gists{/gist_id}", "starred_url": "https://api.github.com/users/blackhat-coder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blackhat-coder/subscriptions", "organizations_url": "https://api.github.com/users/blackhat-coder/orgs", "repos_url": "https://api.github.com/users/blackhat-coder/repos", "events_url": "https://api.github.com/users/blackhat-coder/events{/privacy}", "received_events_url": "https://api.github.com/users/blackhat-coder/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @blackhat-coder, thanks for reporting.\r\n\r\nPlease note that the `transformers` library updated their data collators API last year (version 4.10.0):\r\n- huggingface/transformers#13105\r\n\r\nnow requiring to pass `return_tensors` argument at Data Collator instantiation.\r\n\r\nAnd therefore, we also updated ...
2022-04-01T17:02:47
2022-04-04T07:24:37
2022-04-04T07:21:31
NONE
null
null
null
## Describe the bug Hi ### Error 1 Running the Tensforlow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors' ### Error 2 `DataCollatorWithPadding` isn't imported ## Steps to reproduce the bug ```python import tensorflow as tf from datasets import load_dataset from transformers import AutoTokenizer dataset = load_dataset('glue', 'mrpc', split='train') tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'], shuffle=True, batch_size=16, collate_fn=data_collator, ) ``` This is the same code on Huggingface.co ## Actual results TypeError: __init__() got an unexpected keyword argument 'return_tensors' ## Environment info - `datasets` version: 2.0.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.7 - PyArrow version: 6.0.0 - Pandas version: 1.4.1 >
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4084/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4080/comments
https://api.github.com/repos/huggingface/datasets/issues/4080/events
https://github.com/huggingface/datasets/issues/4080
1,189,667,296
I_kwDODunzps5G6OHg
4,080
NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 2067388877, "...
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @richarddwang,\r\n\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n`...
2022-04-01T11:34:28
2022-04-01T13:59:10
2022-04-01T13:59:10
CONTRIBUTOR
null
null
null
## Steps to reproduce the bug ```python datasets.load_dataset("conll2012_ontonotesv5", "english_v12") ``` ## Actual results ``` Downloading builder script: 32.2kB [00:00, 9.72MB/s] Downloading metadata: 20.0kB [00:00, 10.4MB/s] Downloading and preparing dataset conll2012_ontonotesv5/english_v12 (download: 174.83 MiB, generated: 204.29 MiB, post-processed: Unknown size , total: 379.12 MiB) to ... Traceback (most recent call last): [315/390] File "/home/yisiang/lgtn/conll2012/run.py", line 86, in <module> train() File "/home/yisiang/lgtn/conll2012/run.py", line 65, in train trainer.fit(model, datamodule=dm) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit self._call_and_handle_interrupt( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_inte rrupt return trainer_fn(*args, **kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl self._run(model, ckpt_path=ckpt_path) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1131, in _run self._data_connector.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 154, in pre pare_data self.trainer.datamodule.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn fn(*args, **kwargs) File "/home/yisiang/lgtn/_abstract_task/data.py", line 43, in prepare_data raw_dsets = datasets.load_dataset(**load_dataset_kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/load.py", line 1687, in load_dataset builder_instance.download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 605, in download_and_prepare self._download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 1104, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 676, in _download_and_prepare verify_checksums( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4080/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4077/comments
https://api.github.com/repos/huggingface/datasets/issues/4077/events
https://github.com/huggingface/datasets/issues/4077
1,189,467,585
I_kwDODunzps5G5dXB
4,077
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[]
2022-04-01T08:49:13
2022-04-01T16:16:19
2022-04-01T16:16:19
CONTRIBUTOR
null
null
null
## Describe the bug When uploading a relatively large image dataset of > 1GB, reloading doesn't work for me, even though pushing to the hub went just fine. Basically, I do: ``` from datasets import load_dataset dataset = load_dataset("imagefolder", data_files="path_to_my_files") dataset.push_to_hub("dataset_name") # works fine, no errors reloaded_dataset = load_dataset("dataset_name") ``` and it returns: ``` /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` I created a Colab notebook to reproduce my error: https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4077/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4075/comments
https://api.github.com/repos/huggingface/datasets/issues/4075/events
https://github.com/huggingface/datasets/issues/4075
1,188,462,162
I_kwDODunzps5G1n5S
4,075
Add CCAgT dataset
{ "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, ...
closed
false
{ "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false }
[ { "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.git...
null
[ "Awesome ! Let us know if you have questions or if we can help ;) I'm assigning you\r\n\r\nPS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.", "HI, I was waiting to come out in the second version to do the implement...
2022-03-31T18:20:28
2022-07-06T19:03:42
2022-07-06T19:03:42
NONE
null
null
null
## Adding a Dataset - **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique - **Description:** The dataset contains 2540 images (1600x1200 where each pixel is 0.111μm×0.111μm) from three different slides, having at least one nucleus per image. These images are from fields belonging to a sample cervical slide, colored with silver-stained, a method known as Argyrophilic Nucleolar Organizer Regions (AgNOR). - **Paper:** https://doi.org/10.1109/cbms49503.2020.00110 - **Data:** https://arquivos.ufsc.br/d/373be2177a33426a9e6c/ or https://drive.google.com/drive/u/4/folders/1TBpYCv6S1ydASLauSzcsvO7Wc5O-WUw0 - **Motivation:** This is a unique dataset (because of the stain), for a major health problem, cervical cancer, with real data. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Hi, this is a public version of the dataset that I have been working on, soon we will have another version of this dataset. But until this new version goes out, I thought I would add this dataset here, if it makes sense for the repository. You can assign the task to me if possible
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4075/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4074/comments
https://api.github.com/repos/huggingface/datasets/issues/4074/events
https://github.com/huggingface/datasets/issues/4074
1,188,449,142
I_kwDODunzps5G1kt2
4,074
Error in google/xtreme_s dataset card
{ "login": "wranai", "id": 1048544, "node_id": "MDQ6VXNlcjEwNDg1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1048544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wranai", "html_url": "https://github.com/wranai", "followers_url": "https://api.github.com/users/wranai/followers", "following_url": "https://api.github.com/users/wranai/following{/other_user}", "gists_url": "https://api.github.com/users/wranai/gists{/gist_id}", "starred_url": "https://api.github.com/users/wranai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wranai/subscriptions", "organizations_url": "https://api.github.com/users/wranai/orgs", "repos_url": "https://api.github.com/users/wranai/repos", "events_url": "https://api.github.com/users/wranai/events{/privacy}", "received_events_url": "https://api.github.com/users/wranai/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" }, { "id": 20673888...
closed
false
null
[]
null
[ "Hi @wranai, thanks for reporting.\r\n\r\nPlease note that the information about language families and groups is taken form the original paper: [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752).\r\n\r\nIf that information is wrong, feel free to contact the paper's authors...
2022-03-31T18:07:45
2022-04-01T08:12:56
2022-04-01T08:12:56
NONE
null
null
null
**Link:** https://huggingface.co/datasets/google/xtreme_s Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4074/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4071/comments
https://api.github.com/repos/huggingface/datasets/issues/4071/events
https://github.com/huggingface/datasets/issues/4071
1,187,587,683
I_kwDODunzps5GySZj
4,071
Loading issue for xuyeliu/notebookCDG dataset
{ "login": "Jun-jie-Huang", "id": 46160972, "node_id": "MDQ6VXNlcjQ2MTYwOTcy", "avatar_url": "https://avatars.githubusercontent.com/u/46160972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jun-jie-Huang", "html_url": "https://github.com/Jun-jie-Huang", "followers_url": "https://api.github.com/users/Jun-jie-Huang/followers", "following_url": "https://api.github.com/users/Jun-jie-Huang/following{/other_user}", "gists_url": "https://api.github.com/users/Jun-jie-Huang/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jun-jie-Huang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jun-jie-Huang/subscriptions", "organizations_url": "https://api.github.com/users/Jun-jie-Huang/orgs", "repos_url": "https://api.github.com/users/Jun-jie-Huang/repos", "events_url": "https://api.github.com/users/Jun-jie-Huang/events{/privacy}", "received_events_url": "https://api.github.com/users/Jun-jie-Huang/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Hi @Jun-jie-Huang,\r\n\r\nAs the error message says, \".pkl\" data files are not supported.\r\n\r\nIf you would like to share your dataset on the Hub, you would need:\r\n- either to create a Python loading script, that loads the data in any format\r\n- or to transform your data files to one of the supported format...
2022-03-31T06:36:29
2022-03-31T08:17:01
2022-03-31T08:16:16
NONE
null
null
null
## Dataset viewer issue for '*xuyeliu/notebookCDG*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)* *Couldn't load the xuyeliu/notebookCDG with provided scripts: * ``` from datasets import load_dataset dataset = load_dataset("xuyeliu/notebookCDG/dataset_notebook.pkl") ``` I get an error message as follows: FileNotFoundError: Couldn't find a dataset script at /home/code_documentation/code/xuyeliu/notebookCDG/notebookCDG.py or any data file in the same directory. Couldn't find 'xuyeliu/notebookCDG' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**train*'] in dataset repository xuyeliu/notebookCDG with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] Am I the one who added this dataset ? No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4071/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4062/comments
https://api.github.com/repos/huggingface/datasets/issues/4062/events
https://github.com/huggingface/datasets/issues/4062
1,186,330,732
I_kwDODunzps5Gtfhs
4,062
Loading mozilla-foundation/common_voice_7_0 dataset failed
{ "login": "aapot", "id": 19529125, "node_id": "MDQ6VXNlcjE5NTI5MTI1", "avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aapot", "html_url": "https://github.com/aapot", "followers_url": "https://api.github.com/users/aapot/followers", "following_url": "https://api.github.com/users/aapot/following{/other_user}", "gists_url": "https://api.github.com/users/aapot/gists{/gist_id}", "starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aapot/subscriptions", "organizations_url": "https://api.github.com/users/aapot/orgs", "repos_url": "https://api.github.com/users/aapot/repos", "events_url": "https://api.github.com/users/aapot/events{/privacy}", "received_events_url": "https://api.github.com/users/aapot/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @aapot, thanks for reporting.\r\n\r\nWe are investigating the cause of this issue. We will keep you informed. ", "When making HTTP request from code line:\r\n```\r\nresponse = requests.get(f\"{_API_URL}/bucket/dataset/{path}/{use_cdn}\", timeout=10.0).json()\r\n```\r\nit cannot be decoded to JSON because it r...
2022-03-30T11:39:41
2022-06-21T07:36:23
2022-03-31T08:18:04
NONE
null
null
null
## Describe the bug I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits too than just the `fi` and `test` split. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token="YOUR TOKEN") ``` ## Expected results load `mozilla-foundation/common_voice_7_0` dataset succesfully ## Actual results ``` JSONDecodeError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs) 909 try: --> 910 return complexjson.loads(self.text, **kwargs) 911 except JSONDecodeError as e: /opt/conda/lib/python3.7/site-packages/simplejson/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, **kw) 524 and not use_decimal and not kw): --> 525 return _default_decoder.decode(s) 526 if cls is None: /opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in decode(self, s, _w, _PY3) 369 s = str(s, self.encoding) --> 370 obj, end = self.raw_decode(s) 371 end = _w(s, end).end() /opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in raw_decode(self, s, idx, _w, _PY3) 399 idx += 3 --> 400 return self.scan_once(s, idx=_w(s, idx).end()) JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: JSONDecodeError Traceback (most recent call last) /tmp/ipykernel_358/370980805.py in <module> 1 # load Common Voice 7.0 dataset from Huggingface with Finnish "test" split ----> 2 test_dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1690 ignore_verifications=ignore_verifications, 1691 try_from_hf_gcs=try_from_hf_gcs, -> 1692 use_auth_token=use_auth_token, 1693 ) 1694 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 604 if not downloaded_from_gcs: 605 self._download_and_prepare( --> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 607 ) 608 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) 1102 1103 def _download_and_prepare(self, dl_manager, verify_infos): -> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) 1105 1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 670 split_dict = SplitDict(dataset_name=self.name) 671 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 672 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 673 674 # Checksums verification ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _split_generators(self, dl_manager) 151 152 self._log_download(self.config.name, bundle_version, hf_auth_token) --> 153 archive = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template)) 154 155 if self.config.version < datasets.Version("5.0.0"): ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _get_bundle_url(self, locale, url_template) 130 path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'") 131 use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024 --> 132 response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json() 133 return response["url"] 134 /opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs) 915 raise RequestsJSONDecodeError(e.message) 916 else: --> 917 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) 918 919 @property JSONDecodeError: [Errno Expecting value] Not Found: 0 ``` ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-5.10.90+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 5.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4062/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4061/comments
https://api.github.com/repos/huggingface/datasets/issues/4061/events
https://github.com/huggingface/datasets/issues/4061
1,186,317,071
I_kwDODunzps5GtcMP
4,061
Loading cnn_dailymail dataset failed
{ "login": "Arij-Aladel", "id": 68355048, "node_id": "MDQ6VXNlcjY4MzU1MDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arij-Aladel", "html_url": "https://github.com/Arij-Aladel", "followers_url": "https://api.github.com/users/Arij-Aladel/followers", "following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}", "gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions", "organizations_url": "https://api.github.com/users/Arij-Aladel/orgs", "repos_url": "https://api.github.com/users/Arij-Aladel/repos", "events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}", "received_events_url": "https://api.github.com/users/Arij-Aladel/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODk...
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @Arij-Aladel, thanks for reporting.\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it in our 2.0.0 release. See:\r\n- #3787 \r\n\r\nPlease, update your `datasets` version:\r\n```\r\npip install -...
2022-03-30T11:29:02
2022-03-30T13:36:14
2022-03-30T13:36:14
NONE
null
null
null
## Describe the bug I wanted to load cnn_dailymail dataset from huggingface datasets on jupyter lab, but I am getting an error ` NotADirectoryError:[Errno20] Not a directory ` while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` ## Expected results load `cnn_dailymail` dataset succesfully ## Actual results failed to load and get error > NotADirectoryError: [Errno 20] Not a directory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` 1.8.0: - Platform: Ubuntu-20.04 - Python version: 3.9.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4061/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4057/comments
https://api.github.com/repos/huggingface/datasets/issues/4057/events
https://github.com/huggingface/datasets/issues/4057
1,185,442,001
I_kwDODunzps5GqGjR
4,057
`load_dataset` consumes too much memory for audio + tar archives
{ "login": "JFCeron", "id": 50839826, "node_id": "MDQ6VXNlcjUwODM5ODI2", "avatar_url": "https://avatars.githubusercontent.com/u/50839826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JFCeron", "html_url": "https://github.com/JFCeron", "followers_url": "https://api.github.com/users/JFCeron/followers", "following_url": "https://api.github.com/users/JFCeron/following{/other_user}", "gists_url": "https://api.github.com/users/JFCeron/gists{/gist_id}", "starred_url": "https://api.github.com/users/JFCeron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JFCeron/subscriptions", "organizations_url": "https://api.github.com/users/JFCeron/orgs", "repos_url": "https://api.github.com/users/JFCeron/repos", "events_url": "https://api.github.com/users/JFCeron/events{/privacy}", "received_events_url": "https://api.github.com/users/JFCeron/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Could it be because you need to free the memory used by `tarfile` by emptying the tar `members` by any chance ?\r\n```python\r\n yield key, {\"audio\": {\"path\": audio_name, \"bytes\": audio_file_obj.read()}}\r\n audio_tarfile.members = [] # free memory\r\n key += 1\r\n```\r\n\r\nand th...
2022-03-29T21:38:55
2022-08-16T10:22:55
2022-08-16T10:22:55
NONE
null
null
null
## Description `load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists. ## Steps to reproduce the bug Here's my implementation of `_generate_examples`: ```python class MyDatasetBuilder(datasets.GeneratorBasedBuilder): DEFAULT_WRITER_BATCH_SIZE = 1 ... def _split_generators(self, dl_manager): archive_path = dl_manager.download(_DL_URLS[self.config.name]) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "audio_tarfile_path": archive_path["audio_tarfile"] }, ), ] def _generate_examples(self, audio_tarfile_path): key = 0 with tarfile.open(audio_tarfile_path, mode="r|") as audio_tarfile: for audio_tarinfo in audio_tarfile: audio_name = audio_tarinfo.name audio_file_obj = audio_tarfile.extractfile(audio_tarinfo) yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}} key += 1 ``` I then try to load via `ds = load_dataset('./datasets/my_new_dataset', writer_batch_size=1)`, and memory usage grows until all 8GB of my machine are taken and process is killed (`Killed`). Also tried an untarred version of this using `os.walk` but the same happened. I created a script to confirm that one can safely go through such a generator, which runs just fine with memory <500MB at all times. ```python import tarfile def generate_examples(): audio_tarfile = tarfile.open("audios.tar", mode="r|") key = 0 for audio_tarinfo in audio_tarfile: audio_name = audio_tarinfo.name audio_file_obj = audio_tarfile.extractfile(audio_tarinfo) yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}} key += 1 if __name__ == "__main__": examples = generate_examples() for example in examples: pass ``` ## Expected results Memory consumption should be similar to the non-huggingface script. ## Actual results Process is killed after consuming too much memory. ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4057/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4057/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4056/comments
https://api.github.com/repos/huggingface/datasets/issues/4056/events
https://github.com/huggingface/datasets/issues/4056
1,185,155,775
I_kwDODunzps5GpAq_
4,056
Unexpected behavior of _TempDirWithCustomCleanup
{ "login": "JonasGeiping", "id": 22680696, "node_id": "MDQ6VXNlcjIyNjgwNjk2", "avatar_url": "https://avatars.githubusercontent.com/u/22680696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JonasGeiping", "html_url": "https://github.com/JonasGeiping", "followers_url": "https://api.github.com/users/JonasGeiping/followers", "following_url": "https://api.github.com/users/JonasGeiping/following{/other_user}", "gists_url": "https://api.github.com/users/JonasGeiping/gists{/gist_id}", "starred_url": "https://api.github.com/users/JonasGeiping/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JonasGeiping/subscriptions", "organizations_url": "https://api.github.com/users/JonasGeiping/orgs", "repos_url": "https://api.github.com/users/JonasGeiping/repos", "events_url": "https://api.github.com/users/JonasGeiping/events{/privacy}", "received_events_url": "https://api.github.com/users/JonasGeiping/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! Would setting TMPDIR at the beginning of your python script/session work ? I mean, even before importing transformers, datasets, etc. and using them ? I think this would be the most robust solution given any library that uses `tempfile`. I don't think we aim to support environment variables to be changed at r...
2022-03-29T16:58:22
2022-03-30T15:08:04
null
NONE
null
null
null
## Describe the bug This is not 100% a bug in `datasets`, but behavior that surprised me and I think this could be made more robust on the `datasets`side. When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I want to set TMPDIR at runtime using os.ENVIRON["TMPDIR"] = something, but depending on other imported modules this can fail to take effect. ## Steps to reproduce the bug `_TempDirWithCustomCleanup` relies on `tempfile` to generate a path to a temporary directory. However, `tempfile` generates the path only once. This can be a problem when trying to set TMPDIR at runtime whenever other code imports `tempfile` first and does something unexpected. For example (after too much trial and error) I found out that a different part of the code base I work with defines a class `PatchedDataCollatorForLanguageModeling(transformers.DataCollatorForLanguageModeling)` based on a `transformers` class. This import is enough to trigger `tempfile` to generate `tempfile` to generate a temporary path and leading to the wrong path being cached in `tempfile.tempdir`. ## Suggestion: I could file this also as bug with `transformers`, but I think fixing this on the datasets would be much more robust: Datasets could recompute the temporary path once (technically possible via `tempfile._get_default_tempdir` or resetting the global variable `tempfile.tmpdir` to None) before setting its own global `_TEMP_DIR_FOR_TEMP_CACHE_FILES`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4056/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4053/comments
https://api.github.com/repos/huggingface/datasets/issues/4053/events
https://github.com/huggingface/datasets/issues/4053
1,184,500,378
I_kwDODunzps5Gmgqa
4,053
Modify datatype from `int32` to `float` for pearsonr, spearmanr.
{ "login": "woodywarhol9", "id": 86637320, "node_id": "MDQ6VXNlcjg2NjM3MzIw", "avatar_url": "https://avatars.githubusercontent.com/u/86637320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/woodywarhol9", "html_url": "https://github.com/woodywarhol9", "followers_url": "https://api.github.com/users/woodywarhol9/followers", "following_url": "https://api.github.com/users/woodywarhol9/following{/other_user}", "gists_url": "https://api.github.com/users/woodywarhol9/gists{/gist_id}", "starred_url": "https://api.github.com/users/woodywarhol9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/woodywarhol9/subscriptions", "organizations_url": "https://api.github.com/users/woodywarhol9/orgs", "repos_url": "https://api.github.com/users/woodywarhol9/repos", "events_url": "https://api.github.com/users/woodywarhol9/events{/privacy}", "received_events_url": "https://api.github.com/users/woodywarhol9/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "@Woodywarhol9 good catch, thanks for reporting.\r\n\r\nWe are fixing this." ]
2022-03-29T08:27:41
2022-03-29T14:02:20
2022-03-29T14:02:20
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** - Now [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both get input data as 'int32'. **Describe the solution you'd like** - Considering that those metrics are widely used for the STS task(labels are in 'float' data type), it would be better to modify datatype from 'int32' to 'float' for getting exact values of similarity.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4053/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4052/comments
https://api.github.com/repos/huggingface/datasets/issues/4052/events
https://github.com/huggingface/datasets/issues/4052
1,184,447,977
I_kwDODunzps5GmT3p
4,052
metric = metric_cls( TypeError: 'NoneType' object is not callable
{ "login": "klyuhang9", "id": 39409233, "node_id": "MDQ6VXNlcjM5NDA5MjMz", "avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/klyuhang9", "html_url": "https://github.com/klyuhang9", "followers_url": "https://api.github.com/users/klyuhang9/followers", "following_url": "https://api.github.com/users/klyuhang9/following{/other_user}", "gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}", "starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions", "organizations_url": "https://api.github.com/users/klyuhang9/orgs", "repos_url": "https://api.github.com/users/klyuhang9/repos", "events_url": "https://api.github.com/users/klyuhang9/events{/privacy}", "received_events_url": "https://api.github.com/users/klyuhang9/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [2]: metric = load_metric('glue', 'rte')\r\nDownloading builder script: 5.76kB [00:00, 2.40MB/s]\r\n```\r\n\r\nCould you please, retry to load the metric? Sometimes there are temporary connectivity issues.\r\n\r\nFeel free to re...
2022-03-29T07:43:08
2022-03-29T14:06:01
2022-03-29T14:06:01
NONE
null
null
null
Hi, friend. I meet a problem. When I run the code: `metric = load_metric('glue', 'rte')` There is a problem raising: `metric = metric_cls( TypeError: 'NoneType' object is not callable ` I don't know why. Thanks for your help!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4052/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4051/comments
https://api.github.com/repos/huggingface/datasets/issues/4051/events
https://github.com/huggingface/datasets/issues/4051
1,184,400,179
I_kwDODunzps5GmIMz
4,051
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
{ "login": "klyuhang9", "id": 39409233, "node_id": "MDQ6VXNlcjM5NDA5MjMz", "avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/klyuhang9", "html_url": "https://github.com/klyuhang9", "followers_url": "https://api.github.com/users/klyuhang9/followers", "following_url": "https://api.github.com/users/klyuhang9/following{/other_user}", "gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}", "starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions", "organizations_url": "https://api.github.com/users/klyuhang9/orgs", "repos_url": "https://api.github.com/users/klyuhang9/repos", "events_url": "https://api.github.com/users/klyuhang9/events{/privacy}", "received_events_url": "https://api.github.com/users/klyuhang9/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [4]: ds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\nDownloading builder script: 28.8kB [00:00, 9.15MB/s] ...
2022-03-29T07:00:31
2022-05-08T07:27:32
2022-03-29T08:29:25
NONE
null
null
null
Hi, I meet a problem. When I run the code: `dataset = load_dataset('glue','sst2')` There is a issue raising: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py I don't know why; it is ok when I use Google Chrome to view this url. Thanks for your help!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4051/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4048/comments
https://api.github.com/repos/huggingface/datasets/issues/4048/events
https://github.com/huggingface/datasets/issues/4048
1,183,804,576
I_kwDODunzps5Gj2yg
4,048
Split size error on `amazon_us_reviews` / `PC_v1_00` dataset
{ "login": "trentonstrong", "id": 191985, "node_id": "MDQ6VXNlcjE5MTk4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trentonstrong", "html_url": "https://github.com/trentonstrong", "followers_url": "https://api.github.com/users/trentonstrong/followers", "following_url": "https://api.github.com/users/trentonstrong/following{/other_user}", "gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}", "starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions", "organizations_url": "https://api.github.com/users/trentonstrong/orgs", "repos_url": "https://api.github.com/users/trentonstrong/repos", "events_url": "https://api.github.com/users/trentonstrong/events{/privacy}", "received_events_url": "https://api.github.com/users/trentonstrong/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODk...
closed
false
{ "login": "trentonstrong", "id": 191985, "node_id": "MDQ6VXNlcjE5MTk4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trentonstrong", "html_url": "https://github.com/trentonstrong", "followers_url": "https://api.github.com/users/trentonstrong/followers", "following_url": "https://api.github.com/users/trentonstrong/following{/other_user}", "gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}", "starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions", "organizations_url": "https://api.github.com/users/trentonstrong/orgs", "repos_url": "https://api.github.com/users/trentonstrong/repos", "events_url": "https://api.github.com/users/trentonstrong/events{/privacy}", "received_events_url": "https://api.github.com/users/trentonstrong/received_events", "type": "User", "site_admin": false }
[ { "login": "trentonstrong", "id": 191985, "node_id": "MDQ6VXNlcjE5MTk4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trentonstrong", "html_url": "https://github.com/trentonstrong", "followers_url": "h...
null
[ "Follow-up: I have confirmed there are no duplicate lines via `sort amazon_reviews_us_PC_v1_00.tsv | uniq -cd` after extracting the raw file.", "Hi @trentonstrong, thanks for reporting!\r\n\r\nI confirm that loading this dataset configuration throws a `NonMatchingSplitsSizesError`:\r\n```\r\nNonMatchingSplitsSize...
2022-03-28T18:12:04
2022-04-08T12:29:30
2022-04-08T12:29:30
CONTRIBUTOR
null
null
null
## Describe the bug When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m. Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6m number that we see and the data looks valid at a glance (I did not check for duplicate rows). My guess is this file has either been updated in place or there is a bug in the dataset metadata. Happy to submit a PR and fix this up if turns out to be a metadata issue but wanted to get some other :eyes: on it first. ## Steps to reproduce the bug ```python load_dataset('amazon_us_reviews', 'PC_v1_00') ``` ## Expected results Dataset is downloaded and extracted successfully. ## Actual results An split size exception is thrown. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4048/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4047/comments
https://api.github.com/repos/huggingface/datasets/issues/4047/events
https://github.com/huggingface/datasets/issues/4047
1,183,789,237
I_kwDODunzps5GjzC1
4,047
Dataset.unique(column: str) -> ArrowNotImplementedError
{ "login": "orkenstein", "id": 1461936, "node_id": "MDQ6VXNlcjE0NjE5MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/1461936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orkenstein", "html_url": "https://github.com/orkenstein", "followers_url": "https://api.github.com/users/orkenstein/followers", "following_url": "https://api.github.com/users/orkenstein/following{/other_user}", "gists_url": "https://api.github.com/users/orkenstein/gists{/gist_id}", "starred_url": "https://api.github.com/users/orkenstein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orkenstein/subscriptions", "organizations_url": "https://api.github.com/users/orkenstein/orgs", "repos_url": "https://api.github.com/users/orkenstein/repos", "events_url": "https://api.github.com/users/orkenstein/events{/privacy}", "received_events_url": "https://api.github.com/users/orkenstein/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @orkenstein, thanks for reporting.\r\n\r\nPlease note that for this case, our `datasets` library uses under the hood the Apache Arrow `unique` function: https://arrow.apache.org/docs/python/generated/pyarrow.compute.unique.html#pyarrow.compute.unique\r\n\r\nAnd currently the Apache Arrow `unique` function is on...
2022-03-28T17:59:32
2022-04-01T18:24:57
2022-04-01T18:24:57
NONE
null
null
null
## Describe the bug I'm trying to use `unique()` function, but it fails ## Steps to reproduce the bug 1. Get dataset 2. Call `unique` 3. Error # Sample code to reproduce the bug ```python !pip show datasets from datasets import load_dataset dataset = load_dataset('wikiann', 'en') dataset['train'].column_names dataset['train'].unique(dataset['train'].column_names[0]) ``` ## Expected results It would be nice to actually see unique items ## Actual results Error: ```python --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) [<ipython-input-10-5e0de07ed42c>](https://s0qyv2vjaji-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220324-060046-RC00_436956229#) in <module>() 6 7 dataset['train'].column_names ----> 8 dataset['train'].unique(dataset['train'].column_names[0]) 5 frames /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: Function unique has no kernel matching input types (array[list<item: string>]) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Google Collab - Python version: 3.7.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4047/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4044/comments
https://api.github.com/repos/huggingface/datasets/issues/4044/events
https://github.com/huggingface/datasets/issues/4044
1,183,658,942
I_kwDODunzps5GjTO-
4,044
CLI dummy data generation is broken
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[]
2022-03-28T16:07:37
2022-03-31T14:59:06
2022-03-31T14:59:06
MEMBER
null
null
null
## Describe the bug We get a TypeError when running CLI dummy data generation: ```shell datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate ``` gives: ``` File ".../huggingface/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data dataset_builder._prepare_split(split_generator) TypeError: _prepare_split() missing 1 required positional argument: 'check_duplicate_keys' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4044/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4041/comments
https://api.github.com/repos/huggingface/datasets/issues/4041/events
https://github.com/huggingface/datasets/issues/4041
1,183,599,461
I_kwDODunzps5GjEtl
4,041
Add support for IIIF in datasets
{ "login": "davanstrien", "id": 8995957, "node_id": "MDQ6VXNlcjg5OTU5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davanstrien", "html_url": "https://github.com/davanstrien", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "repos_url": "https://api.github.com/users/davanstrien/repos", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! Thanks for the detailed analysis of adding IIIF support. I like the idea of \"using IIIF through datasets scripts\" due to its ease of use. Another approach that I like is yielding image ids and using the `piffle` library (which offers a bit more flexibility) + `map` to download + cache images. We can handle b...
2022-03-28T15:19:25
2022-04-05T18:20:53
null
MEMBER
null
null
null
This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred. ## What is [IIIF](https://iiif.io/)? IIIF (International Image Interoperability Framework) > is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions. The tl;dr is that IIIF provides various specifications for implementing useful functionality for: - Institutions to make available images for various use cases - Users to have a consistent way of interacting/requesting these images - For developers to have a common standard for developing tools for working with IIIF images that will work across all institutions that implement a particular IIIF standard (for example the image viewer for the BNF can also work for the Library of Congress if they both use IIIF). Some institutions that various levels of support IIF include: The British Library, Internet Archive, Library of Congress, Wikidata. There are also many smaller institutions that have IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/ ## IIIF APIs IIIF consists of a number of APIs which could be integrated with datasets. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/) ### IIIF Image API The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL: ```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}``` A concrete example of this: ```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg``` As you can see the scheme offers a number of options that can be specified in the URL, for example, size. Using the example URL we return: ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg) We can change the size to request a size of 250 by 250, this is done by changing the size from `full` to `250,250` i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg) We can also request the image with max width 250, max height 250 whilst maintaining the aspect ratio using `!w,h`. i.e. change the url to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg) A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size ## Why would/could this be useful for datasets? There are a few reasons why support for the IIIF Image API could be useful. Broadly the ability to have more control over how an image is returned from a server is useful for many ML workflows: - images can be requested in the right size, this prevents having to download/stream large images when the actual desired size is much smaller - can select a subset of an image: it is possible to select a sub-region of an image, this could be useful for example when you already have a bounding box for a subset of an image and then want to use this subset of an image for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request parts of a newspaper image that have been detected as 'photograph', 'illustration' etc for downstream use. - options for quality, rotation, the format can all be encoded in the URL request. These may become particularly useful when pre-training models on large image datasets where the cost of downloading images with 1600 pixel width when you actually want 240 has a larger impact. ## What could this look like in datasets? I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out but hopefully, give a sense of possible approaches that match existing `datasets` methods in their approach. ### Use through datasets scripts Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in datasets scripts, i.e. the dataset script can expose the IIIF options via the dataset script: ```python ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg") ``` This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script. ### Support through dataset scripts (with some datasets support) This is similar to the above but `datasets` would offer some way of saying this is a iiif URL and then expose the options associated with IIIF images automatically. i.e. if you did something like: ```python features = {"label": ClassLabel(names=['dog','cat']), "url": datasets.IIIFURL()} ``` inside your loading script, you would automatically have exposed `size`, `fmt` etc. options when loading the dataset. ### Other possible integrations Some other possible pseudocode ways that a user could interact with IIIF URLs: The ability to cast to an `IIIFImage` feature type: ``` ds.cast_column('url', IIIFImage, download=False) ``` The ability to specify some options associated with IIIF urls. ``` ds = ds.set_iiif_options(column='url', size="250,250") ``` I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`, the difference would be that the underlying URL could be modified in various ways. ## prerequisite requirements There are a few pre-requisites that I can anticipate. This doesn't cover a full implementation of IIIF support which would have different requirements depending on the approach taken to implementing IIIF. Some of these features would be useful independently of adding IIIF support: ### support for handling failed images loaded via a URL (or a specific IIIFImage feature). Working with images via web requests will inevitably return the odd failed request. If these images are then requests and don't return it would be useful to have a `None` returned instead of an error. For example, when using `push_to_hub` `datasets` will try and include the image but currently fails with bad URLs. ```python from datasets import Dataset import datasets urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3 urls.append("badurl.com/image.jpg") data = {"url":urls} ds = Dataset.from_dict(data) ds = ds.cast_column('url', datasets.Image()) ds[3]['url'] ``` returns a `FileNotFoundError`, for streaming large datasets of images using their URLs it could be useful to have `None` returned instead. This has implications for the actual training loop i.e. you now need to somehow skip those examples because of this it might not be desirable to support this. ### Caching support Since IIIF requests images via a URL it would be great to have a way of not requesting the images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142 and I think this would also be very desirable to have here particularly as one of the primary use cases of IIIF may be to do unsupervised pre-training on large datasets of IIIF URLs. ### Support for Parsing IIIF URLs This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the users specify is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share. ## Why it might not be worthwhile/suitable for datasets There are some reasons that this might not be worth implementing: - currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models. - It may end up being better to leave this to the user. It would for example be possible for someone to write map functions to change an IIIF URL to the correct size etc. Adding direct support for IIIF in datasets may potentially not be worth the trouble. - The impact of different approaches to doing image scaling can impact the downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement different approaches to resizing images this could have a downstream impact on model performance. think this is something that could be flagged to the end-user in the documentation. This probably also falls into general "gotchas" that probably aren't the `datasets` libraries' role to protect users from. Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets. ## Suggested next steps: I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues let me know and I can open new issues for those.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4041/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4041/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4037/comments
https://api.github.com/repos/huggingface/datasets/issues/4037/events
https://github.com/huggingface/datasets/issues/4037
1,183,144,486
I_kwDODunzps5GhVom
4,037
Error while building documentation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "After some investigation, maybe the bug is in `doc-builder`.\r\n\r\nI've opened an issue there:\r\n- huggingface/doc-builder#160", "Fixed by @lewtun (thank you):\r\n- huggingface/doc-builder@31fe6c8bc7225810e281c2f6c6cd32f38828c504" ]
2022-03-28T09:22:44
2022-03-28T10:01:52
2022-03-28T10:00:48
MEMBER
null
null
null
## Describe the bug Documentation building is failing: - https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true ``` ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4037/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4037/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4032/comments
https://api.github.com/repos/huggingface/datasets/issues/4032/events
https://github.com/huggingface/datasets/issues/4032
1,182,595,697
I_kwDODunzps5GfPpx
4,032
can't download cats_vs_dogs dataset
{ "login": "RRaphaell", "id": 74569835, "node_id": "MDQ6VXNlcjc0NTY5ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/74569835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RRaphaell", "html_url": "https://github.com/RRaphaell", "followers_url": "https://api.github.com/users/RRaphaell/followers", "following_url": "https://api.github.com/users/RRaphaell/following{/other_user}", "gists_url": "https://api.github.com/users/RRaphaell/gists{/gist_id}", "starred_url": "https://api.github.com/users/RRaphaell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RRaphaell/subscriptions", "organizations_url": "https://api.github.com/users/RRaphaell/orgs", "repos_url": "https://api.github.com/users/RRaphaell/repos", "events_url": "https://api.github.com/users/RRaphaell/events{/privacy}", "received_events_url": "https://api.github.com/users/RRaphaell/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Thnaks for reporting @RRaphaell.\r\n\r\nWe are fixing it. " ]
2022-03-27T17:05:39
2022-03-28T07:44:24
2022-03-28T07:44:24
NONE
null
null
null
## Describe the bug can't download cats_vs_dogs dataset. error: Checksums didn't match for dataset source files ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cats_vs_dogs") ``` ## Expected results loaded successfully. ## Actual results NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip'] ## Environment info fresh google colab notebook
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4032/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4031/comments
https://api.github.com/repos/huggingface/datasets/issues/4031/events
https://github.com/huggingface/datasets/issues/4031
1,182,415,124
I_kwDODunzps5GejkU
4,031
Cannot load the dataset conll2012_ontonotesv5
{ "login": "cathyxl", "id": 8326473, "node_id": "MDQ6VXNlcjgzMjY0NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/8326473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cathyxl", "html_url": "https://github.com/cathyxl", "followers_url": "https://api.github.com/users/cathyxl/followers", "following_url": "https://api.github.com/users/cathyxl/following{/other_user}", "gists_url": "https://api.github.com/users/cathyxl/gists{/gist_id}", "starred_url": "https://api.github.com/users/cathyxl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cathyxl/subscriptions", "organizations_url": "https://api.github.com/users/cathyxl/orgs", "repos_url": "https://api.github.com/users/cathyxl/repos", "events_url": "https://api.github.com/users/cathyxl/events{/privacy}", "received_events_url": "https://api.github.com/users/cathyxl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @cathyxl, thanks for reporting.\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitH...
2022-03-27T07:38:23
2022-03-28T06:58:31
2022-03-28T06:31:18
NONE
null
null
null
## Describe the bug Cannot load the dataset conll2012_ontonotesv5 ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test") print(dataset) ``` ## Expected results The datasets should be downloaded successfully ## Actual results raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4031/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4029/comments
https://api.github.com/repos/huggingface/datasets/issues/4029/events
https://github.com/huggingface/datasets/issues/4029
1,181,057,011
I_kwDODunzps5GZX_z
4,029
Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold
{ "login": "MoritzLaurer", "id": 41862082, "node_id": "MDQ6VXNlcjQxODYyMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MoritzLaurer", "html_url": "https://github.com/MoritzLaurer", "followers_url": "https://api.github.com/users/MoritzLaurer/followers", "following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}", "gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}", "starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions", "organizations_url": "https://api.github.com/users/MoritzLaurer/orgs", "repos_url": "https://api.github.com/users/MoritzLaurer/repos", "events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}", "received_events_url": "https://api.github.com/users/MoritzLaurer/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! You can access the faiss index with\r\n```python\r\nfaiss_index = my_dataset.get_index(\"my_index_name\").faiss_index\r\n```\r\nand then do whatever you want with it, e.g. query it using range_search:\r\n```python\r\nthreshold = 0.95\r\nlimits, distances, indices = faiss_index.range_search(x=xq, thresh=thresh...
2022-03-25T17:31:33
2022-05-06T08:35:52
2022-05-06T08:35:52
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I would like to be able to repeat many different queries on the dataset quickly. **Describe the solution you'd like** dataset objects currently have the .get_nearest_examples() method for text retrieval via FAISS. But this only allows retrieving a specific number of K texts instead of everything above a specified similarity threshold. It would be great if HF Datasets would also support the FAISS method .range_search() for retrieving texts above a certain similarity threshold. see details here: https://github.com/facebookresearch/faiss/issues/1273 **Describe alternatives you've considered** I've considered using native FAISS, but doing this via HF datasets would be better. My assumption is that Dataset features like dataset streaming make it easier to work with large datasets **Additional context** The concrete use-case is: I have a large dataset (wikipedia) and I would like to retrieve all paragraphs which are similar to a query. I will use sentence-transformers for encoding the texts.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4029/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4027/comments
https://api.github.com/repos/huggingface/datasets/issues/4027/events
https://github.com/huggingface/datasets/issues/4027
1,180,991,344
I_kwDODunzps5GZH9w
4,027
ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme'
{ "login": "MoritzLaurer", "id": 41862082, "node_id": "MDQ6VXNlcjQxODYyMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MoritzLaurer", "html_url": "https://github.com/MoritzLaurer", "followers_url": "https://api.github.com/users/MoritzLaurer/followers", "following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}", "gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}", "starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions", "organizations_url": "https://api.github.com/users/MoritzLaurer/orgs", "repos_url": "https://api.github.com/users/MoritzLaurer/repos", "events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}", "received_events_url": "https://api.github.com/users/MoritzLaurer/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODk...
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi, @MoritzLaurer, thanks for reporting.\r\n\r\nNormally this is due to a mismatch between the versions of your Elasticsearch client and server:\r\n- your ES client is passing only keyword arguments to your ES server\r\n- whereas your ES server expects a positional argument called 'scheme'\r\n\r\nIn order to fix t...
2022-03-25T16:22:28
2022-04-07T10:29:52
2022-03-28T07:58:56
NONE
null
null
null
## Describe the bug I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch ``` from datasets import load_dataset squad = load_dataset('crime_and_punish', split='train[:1000]') ``` When I run the line: `squad.add_elasticsearch_index("context", host="localhost", port="9200")` I get the error: `TypeError: __init__() missing 1 required positional argument: 'scheme'` ## Expected results No error message ## Actual results ``` TypeError Traceback (most recent call last) [<ipython-input-23-9205593edef3>](https://localhost:8080/#) in <module>() 1 import elasticsearch ----> 2 squad.add_elasticsearch_index("text", host="localhost", port="9200") 6 frames [/usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/utils.py](https://localhost:8080/#) in host_mapping_to_node_config(host) 209 options["path_prefix"] = options.pop("url_prefix") 210 --> 211 return NodeConfig(**options) # type: ignore 212 213 TypeError: __init__() missing 1 required positional argument: 'scheme' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.0 - Platform: Linux, Google Colab - Python version: Google Colab (probably 3.7) - PyArrow version: ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4027/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4025/comments
https://api.github.com/repos/huggingface/datasets/issues/4025/events
https://github.com/huggingface/datasets/issues/4025
1,180,963,105
I_kwDODunzps5GZBEh
4,025
Missing argument in precision/recall
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Thanks for the suggestion, @Dref360.\r\n\r\nWe are adding that argument. " ]
2022-03-25T15:55:52
2022-03-28T09:53:06
2022-03-28T09:53:06
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** [`sklearn.metrics.precision_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) accepts an argument `zero_division`, but it is not available in [precision Metric](https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py#L117) Same issue is present for Recall. **Describe the solution you'd like** Support for **kwargs or adding a new field for `zero_division`. **Describe alternatives you've considered** I could filter the warnings myself, but that is not ideal. **Additional context** I can make the requested changes if this is approved.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4025/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4015/comments
https://api.github.com/repos/huggingface/datasets/issues/4015/events
https://github.com/huggingface/datasets/issues/4015
1,180,510,856
I_kwDODunzps5GXSqI
4,015
Can not correctly parse the classes with imagefolder
{ "login": "YiSyuanChen", "id": 21264909, "node_id": "MDQ6VXNlcjIxMjY0OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/21264909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YiSyuanChen", "html_url": "https://github.com/YiSyuanChen", "followers_url": "https://api.github.com/users/YiSyuanChen/followers", "following_url": "https://api.github.com/users/YiSyuanChen/following{/other_user}", "gists_url": "https://api.github.com/users/YiSyuanChen/gists{/gist_id}", "starred_url": "https://api.github.com/users/YiSyuanChen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YiSyuanChen/subscriptions", "organizations_url": "https://api.github.com/users/YiSyuanChen/orgs", "repos_url": "https://api.github.com/users/YiSyuanChen/repos", "events_url": "https://api.github.com/users/YiSyuanChen/events{/privacy}", "received_events_url": "https://api.github.com/users/YiSyuanChen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I found that the problem arises because the image files in my folder are actually symbolic links (for my own reasons). After modifications, the classes can now be correctly parsed. Therefore, I close this issue.", "HI, I have a question. How much time did you load the ImageNet data files? " ]
2022-03-25T08:51:17
2022-03-28T01:02:03
2022-03-25T09:27:56
NONE
null
null
null
## Describe the bug I try to load my own image dataset with imagefolder, but the parsing of classes is incorrect. ## Steps to reproduce the bug I organized my dataset (ImageNet) in the following structure: ``` - imagenet/ - train/ - n01440764/ - ILSVRC2012_val_00000293.jpg - ...... - n01695060/ - ...... - val/ - n01440764/ - n01695060/ - ...... ``` At first, I followed the instructions from the Huggingface [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification#using-your-own-data) to load my data as: ``` from datasets import load_dataset data_files = {'train': 'imagenet/train', 'val': 'imagenet/val'} ds = load_dataset("nateraw/image-folder", data_files=data_files, task="image-classification") ``` but it resulted following error (I mask my personal path as <PERSONAL_PATH>): ``` FileNotFoundError: Unable to find 'https://huggingface.co/datasets/nateraw/image-folder/resolve/main/imagenet/train' at <PERSONAL_PATH>/ImageNet/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main ``` Next, I followed a recent issue #3960 to load data as: ``` from datasets import load_dataset data_files = {'train': ['imagenet/train/**'], 'val': ['imagenet/val/**']} ds = load_dataset("imagefolder", data_files=data_files, task="image-classification") ``` and the data can be loaded without error as: (I copy val folder to train folder for illustration) ``` >>> ds DatasetDict({ train: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) val: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) }) ``` However, the parsed classes is wrong (should be 1000 classes): ``` >>> ds["train"].features {'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=1, names=['val'], id=None)} ``` ## Expected results I expect that the "labels" in ds["train"].features should contain 1000 classes. ## Actual results The "labels" in ds["train"].features contains only 1 wrong class. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Ubuntu 18.04 - Python version: Python 3.7.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4015/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4013/comments
https://api.github.com/repos/huggingface/datasets/issues/4013/events
https://github.com/huggingface/datasets/issues/4013
1,180,427,174
I_kwDODunzps5GW-Om
4,013
Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
{ "login": "hazalturkmen", "id": 42860397, "node_id": "MDQ6VXNlcjQyODYwMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/42860397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hazalturkmen", "html_url": "https://github.com/hazalturkmen", "followers_url": "https://api.github.com/users/hazalturkmen/followers", "following_url": "https://api.github.com/users/hazalturkmen/following{/other_user}", "gists_url": "https://api.github.com/users/hazalturkmen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hazalturkmen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hazalturkmen/subscriptions", "organizations_url": "https://api.github.com/users/hazalturkmen/orgs", "repos_url": "https://api.github.com/users/hazalturkmen/repos", "events_url": "https://api.github.com/users/hazalturkmen/events{/privacy}", "received_events_url": "https://api.github.com/users/hazalturkmen/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @hazalturkmen, thanks for reporting.\r\n\r\nNote that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.\r\n\r\nWhen there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file...
2022-03-25T07:12:02
2022-04-04T08:05:01
2022-03-25T14:16:11
NONE
null
null
null
## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM' **Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM* *I cannot see the dataset preview.* ``` Server Error Status code: 400 Exception: HTTPError Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true ``` Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4013/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4009/comments
https://api.github.com/repos/huggingface/datasets/issues/4009/events
https://github.com/huggingface/datasets/issues/4009
1,179,658,611
I_kwDODunzps5GUClz
4,009
AMI load_dataset error: sndfile library not found
{ "login": "i-am-neo", "id": 102043285, "node_id": "U_kgDOBhUOlQ", "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-neo", "html_url": "https://github.com/i-am-neo", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "repos_url": "https://api.github.com/users/i-am-neo/repos", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Issue unresolved, see [4000](https://github.com/huggingface/datasets/issues/4009#issue-1179658611)" ]
2022-03-24T15:13:38
2022-03-24T15:46:38
2022-03-24T15:17:29
NONE
null
null
null
## Describe the bug Getting error message when loading AMI dataset. ## Steps to reproduce the bug `python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ` ## Expected results A clear and concise description of the expected results. ## Actual results Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4009/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4007/comments
https://api.github.com/repos/huggingface/datasets/issues/4007/events
https://github.com/huggingface/datasets/issues/4007
1,179,381,021
I_kwDODunzps5GS-0d
4,007
set_format does not work with multi dimension tensor
{ "login": "phihung", "id": 5902432, "node_id": "MDQ6VXNlcjU5MDI0MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phihung", "html_url": "https://github.com/phihung", "followers_url": "https://api.github.com/users/phihung/followers", "following_url": "https://api.github.com/users/phihung/following{/other_user}", "gists_url": "https://api.github.com/users/phihung/gists{/gist_id}", "starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phihung/subscriptions", "organizations_url": "https://api.github.com/users/phihung/orgs", "repos_url": "https://api.github.com/users/phihung/repos", "events_url": "https://api.github.com/users/phihung/events{/privacy}", "received_events_url": "https://api.github.com/users/phihung/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! Use the `ArrayXD` feature type (where X is the number of dimensions) to get correctly formated tensors. So in your case, define the dataset as follows :\r\n```python\r\nds = Dataset.from_dict({\"A\": [torch.rand((2, 2))]}, features=Features({\"A\": Array2D(shape=(2, 2), dtype=\"float32\")}))\r\n```\r\n", "Hi...
2022-03-24T11:27:43
2022-03-30T07:28:57
2022-03-24T14:39:29
NONE
null
null
null
## Describe the bug set_format only transforms the last dimension of a multi-dimension list to tensor ## Steps to reproduce the bug ```python import torch from datasets import Dataset ds = Dataset.from_dict({"A": [torch.rand((2, 2))]}) # ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result ds = ds.with_format("torch") print(ds[0]) ``` ## Expected results ``` {'A': [tensor([[0.6689, 0.1516], [0.1403, 0.5567]])]} ``` ## Actual results ``` {'A': [tensor([0.6689, 0.1516]), tensor([0.1403, 0.5567])]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - datasets version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4007/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4005/comments
https://api.github.com/repos/huggingface/datasets/issues/4005/events
https://github.com/huggingface/datasets/issues/4005
1,179,365,663
I_kwDODunzps5GS7Ef
4,005
Yelp not working
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "I don't think it's an issue with the dataset-viewer. Maybe @lhoestq or @albertvillanova could confirm.\r\n\r\n```python\r\n>>> from datasets import load_dataset, DownloadMode\r\n>>> import itertools\r\n>>> # without streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"tr...
2022-03-24T11:14:00
2022-03-25T14:59:57
2022-03-25T14:56:10
CONTRIBUTOR
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train Doesn't work: ``` Server error Status code: 400 Exception: Error Message: line contains NULL ``` Am I the one who added this dataset ? No A seamingly copy of the dataset: https://huggingface.co/datasets/SetFit/yelp_review_full works . The original one: https://huggingface.co/datasets/yelp_review_full has > 20K downloads.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4005/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4003/comments
https://api.github.com/repos/huggingface/datasets/issues/4003/events
https://github.com/huggingface/datasets/issues/4003
1,179,286,877
I_kwDODunzps5GSn1d
4,003
ASSIN2 dataset checksum bug
{ "login": "ruanchaves", "id": 14352388, "node_id": "MDQ6VXNlcjE0MzUyMzg4", "avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruanchaves", "html_url": "https://github.com/ruanchaves", "followers_url": "https://api.github.com/users/ruanchaves/followers", "following_url": "https://api.github.com/users/ruanchaves/following{/other_user}", "gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}", "starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions", "organizations_url": "https://api.github.com/users/ruanchaves/orgs", "repos_url": "https://api.github.com/users/ruanchaves/repos", "events_url": "https://api.github.com/users/ruanchaves/events{/privacy}", "received_events_url": "https://api.github.com/users/ruanchaves/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Using latest code, I am still facing the issue.\r\n\r\n```python\r\n(base) vimos@vimosmu ➜ ~ ipython\r\nPython 3.6.7 | packaged by conda-forge | (default, Nov 6 2019, 16:19:42) \r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help...
2022-03-24T10:08:50
2022-04-27T14:14:45
2022-03-28T13:56:39
CONTRIBUTOR
null
null
null
## Describe the bug Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2). `NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`. Similar to #3952 , #3942 , #3941 , etc. ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) [<ipython-input-13-c664a92ad5e7>](https://localhost:8080/#) in <module>() ----> 1 load_dataset('assin2') 4 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'] ``` ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("assin2") ``` ## Expected results Load the dataset. ## Actual results The dataset won't load. ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Google Colab - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4003/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4001/comments
https://api.github.com/repos/huggingface/datasets/issues/4001/events
https://github.com/huggingface/datasets/issues/4001
1,179,231,418
I_kwDODunzps5GSaS6
4,001
How to use generate this multitask dataset for SQUAD? I am getting a value error.
{ "login": "gsk1692", "id": 1963097, "node_id": "MDQ6VXNlcjE5NjMwOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1963097?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gsk1692", "html_url": "https://github.com/gsk1692", "followers_url": "https://api.github.com/users/gsk1692/followers", "following_url": "https://api.github.com/users/gsk1692/following{/other_user}", "gists_url": "https://api.github.com/users/gsk1692/gists{/gist_id}", "starred_url": "https://api.github.com/users/gsk1692/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gsk1692/subscriptions", "organizations_url": "https://api.github.com/users/gsk1692/orgs", "repos_url": "https://api.github.com/users/gsk1692/repos", "events_url": "https://api.github.com/users/gsk1692/events{/privacy}", "received_events_url": "https://api.github.com/users/gsk1692/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Replacing `nlp.<obj>` with `datasets.<obj>` in the script should fix the problem. `nlp` has been renamed to `datasets` more than a year ago, so please use `datasets` instead to avoid weird issues.", "Thank You! Was able to solve with the help of this.", "But I request you to please fix the same in the data...
2022-03-24T09:21:51
2022-03-26T09:48:21
2022-03-26T03:35:43
NONE
null
null
null
## Dataset viewer issue for 'squad_multitask*' **Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask *short description of the issue* I am trying to generate the multitask dataset for squad dataset. However, gives the error in dataset explorer as well as my local machine. I tried the command: dataset = load_dataset("vershasaxena91/squad_multitask", 'highlight_qg_format') Error: Status code: 400 Exception: TypeError Message: argument of type 'Value' is not iterable Kindly advice.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4001/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4000/comments
https://api.github.com/repos/huggingface/datasets/issues/4000/events
https://github.com/huggingface/datasets/issues/4000
1,178,844,616
I_kwDODunzps5GQ73I
4,000
load_dataset error: sndfile library not found
{ "login": "i-am-neo", "id": 102043285, "node_id": "U_kgDOBhUOlQ", "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-neo", "html_url": "https://github.com/i-am-neo", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "repos_url": "https://api.github.com/users/i-am-neo/repos", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @i-am-neo,\r\n\r\nThe audio support is an extra feature of `datasets` and therefore it must be installed as an additional optional dependency:\r\n```shell\r\npip install datasets[audio]\r\n```\r\nAdditionally, for specific MP3 support (which is not the case for AMI dataset, that contains WAV audio files), there...
2022-03-24T01:52:32
2022-03-25T17:53:33
2022-03-25T17:53:33
NONE
null
null
null
## Describe the bug Can't load ami dataset ## Steps to reproduce the bug ``` python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ``` ## Expected results ## Actual results Downloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e... AMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. 100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 36004.88it/s] 100%|█████████████████████████████████████████████████████████| 136/136 [00:01<00:00, 79.10it/s] 100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 25343.23it/s] 100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2874.78it/s] 100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 27950.38it/s] 100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2892.25it/s] Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4000/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3996/comments
https://api.github.com/repos/huggingface/datasets/issues/3996/events
https://github.com/huggingface/datasets/issues/3996
1,178,415,905
I_kwDODunzps5GPTMh
3,996
Audio.encode_example() throws an error when writing example from array
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "...
null
[ "Good catch ! Yes I think passing `format=\"wav\"` is the right thing to do", "Thanks @polinaeterna for reporting this issue.\r\n\r\nIn relation to the decoding of MP3 audio files without torchaudio, I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 fil...
2022-03-23T17:11:47
2022-03-29T14:16:13
2022-03-29T14:16:13
CONTRIBUTOR
null
null
null
## Describe the bug When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error: `TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7f4218c0db30>` ## Steps to reproduce the bug ### Sample code to reproduce the bug ```python # download sample file !wget https://huggingface.co/datasets/polinaeterna/test_encode_example/resolve/main/common_voice_vi_21824030.mp3 arr, sr = librosa.load("common_voice_vi_21824030.mp3") Audio().encode_example({ "path": "common_voice_vi_21824030.mp3", "array": arr, "sampling_rate":sr }) ``` ## Expected results An encoded example (`{"bytes": b'....', "path": 'path'}`) ## Actual results ```python TypeError Traceback (most recent call last) Input In [3], in <module> 1 arr, sr = librosa.load("common_voice_vi_21824030.mp3") ----> 3 Audio().encode_example({ 4 "path": "common_voice_vi_21824030.mp3", 5 "array": arr, 6 "sampling_rate":sr 7 }) File ~/workspace/datasets/src/datasets/features/audio.py:75, in Audio.encode_example(self, value) 73 elif isinstance(value, dict) and "array" in value: 74 buffer = BytesIO() ---> 75 sf.write(buffer, value["array"], value["sampling_rate"]) 76 return {"bytes": buffer.getvalue(), "path": value.get("path")} 77 elif value.get("bytes") is not None or value.get("path") is not None: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:314, in write(file, data, samplerate, subtype, endian, format, closefd) 312 else: 313 channels = data.shape[1] --> 314 with SoundFile(file, 'w', samplerate, channels, 315 subtype, endian, format, closefd) as f: 316 f.write(data) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:627, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 625 mode_int = _check_mode(mode) 626 self._mode = mode --> 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1416, in _create_info_struct(file, mode, samplerate, channels, format, subtype, endian) 1414 original_format = format 1415 if format is None: -> 1416 format = _get_format_from_filename(file, mode) 1417 assert isinstance(format, (_unicode, str)) 1418 else: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1457, in _get_format_from_filename(file, mode) 1455 pass 1456 if format.upper() not in _formats and 'r' not in mode: -> 1457 raise TypeError("No format specified and unable to get format from " 1458 "file extension: {0!r}".format(file)) 1459 return format TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7fd8daf88180> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets master - Platform: Ubuntu 20.04 - Python version: python 3.8.12 - PyArrow version: 6.0.1 ## Solution I guess we just need to add `format` arg in [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L75) like this: ```python sf.write(buffer, value["array"], value["sampling_rate"], format="wav") ``` BTW discovered this when trying to decode audio in mp3 format without torchaudio (would be useful for TensorFlow users), like this: ```python from datasets import load_dataset, Features, Audio ds = load_dataset("common_voice", "vi", split="test") ds = ds.remove_columns("audio") ds.select(range(3)) # 3 samples just for testing def load_mp3_with_librosa(example): arr, sr = librosa.load(example["path"]) example["audio"] = { "path": example["path"], "array": arr, "sampling_rate": sr } return example updated_dataset = ds.map(lambda example: load_mp3_with_librosa(example), features=Features( {"audio": Audio(decode=False)} )) ``` @lhoestq @mariosasko @albertvillanova am I right in my logic? do we agree that we can set wav as the format? 🤗
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3996/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3996/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3993/comments
https://api.github.com/repos/huggingface/datasets/issues/3993/events
https://github.com/huggingface/datasets/issues/3993
1,178,201,495
I_kwDODunzps5GOe2X
3,993
Streaming dataset + interleave + DataLoader hangs with multiple workers
{ "login": "jpilaul", "id": 614861, "node_id": "MDQ6VXNlcjYxNDg2MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/614861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jpilaul", "html_url": "https://github.com/jpilaul", "followers_url": "https://api.github.com/users/jpilaul/followers", "following_url": "https://api.github.com/users/jpilaul/following{/other_user}", "gists_url": "https://api.github.com/users/jpilaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/jpilaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jpilaul/subscriptions", "organizations_url": "https://api.github.com/users/jpilaul/orgs", "repos_url": "https://api.github.com/users/jpilaul/repos", "events_url": "https://api.github.com/users/jpilaul/events{/privacy}", "received_events_url": "https://api.github.com/users/jpilaul/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Same thing occurs when streaming files loaded from disk.", "Hi ! Thanks for reporting, could this be related to https://github.com/huggingface/datasets/issues/3950 ?\r\n\r\nCurrently streaming datasets only works in single process, but we're working on having in work in distributed setups as well :) (EDIT: done)...
2022-03-23T14:27:29
2023-02-28T14:14:24
null
NONE
null
null
null
## Describe the bug Interleaving multiple iterable datasets that use `load_dataset` on streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers. ## Steps to reproduce the bug ```python from datasets import interleave_datasets, load_dataset from torch.utils.data import DataLoader en_dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) fr_dataset = load_dataset('oscar', "unshuffled_deduplicated_fr", split='train', streaming=True) it_dataset = load_dataset('oscar', "unshuffled_deduplicated_it", split='train', streaming=True) de_dataset = load_dataset('oscar', "unshuffled_deduplicated_de", split='train', streaming=True) multilingual_dataset = interleave_datasets([en_dataset, fr_dataset, de_dataset, it_dataset]) multilingual_dataset = multilingual_dataset.with_format('torch') next(iter(multilingual_dataset)) # works fairly fast dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=4) for batch in dataloader: print(len(batch)) # prints nothing after 30 min of waiting dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=0) for batch in dataloader: print(len(batch)) # prints right away ``` ## Expected results It should be able to iterate the dataset with multiple workers. ## Actual results Prints with results with `next(iter(multilingual_dataset)) ` and `num_workers=0` but it prints nothing with `num_workers=4` or any number above 0. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.1.dev0 - `pytorch` version: 1.10.0+cu113 - Python version: 3.7 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3993/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/3992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3992/comments
https://api.github.com/repos/huggingface/datasets/issues/3992/events
https://github.com/huggingface/datasets/issues/3992
1,177,946,153
I_kwDODunzps5GNggp
3,992
Image column is not decoded in map when using with with_transform
{ "login": "phihung", "id": 5902432, "node_id": "MDQ6VXNlcjU5MDI0MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phihung", "html_url": "https://github.com/phihung", "followers_url": "https://api.github.com/users/phihung/followers", "following_url": "https://api.github.com/users/phihung/following{/other_user}", "gists_url": "https://api.github.com/users/phihung/gists{/gist_id}", "starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phihung/subscriptions", "organizations_url": "https://api.github.com/users/phihung/orgs", "repos_url": "https://api.github.com/users/phihung/repos", "events_url": "https://api.github.com/users/phihung/events{/privacy}", "received_events_url": "https://api.github.com/users/phihung/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https:...
null
[ "Hi! This behavior stems from this line: https://github.com/huggingface/datasets/blob/799b817d97590ddc97cbd38d07469403e030de8c/src/datasets/arrow_dataset.py#L1919\r\nBasically, the `Image`/`Audio` columns are decoded only if the `format_type` attribute is `None` (`set_format`/`with_format` and `set_transform`/`with...
2022-03-23T10:51:13
2022-12-13T16:59:06
2022-12-13T16:59:06
NONE
null
null
null
## Describe the bug Image column is not _decoded_ in **map** when using with `with_transform` ## Steps to reproduce the bug ```python from datasets import Image, Dataset def add_C(batch): batch["C"] = batch["A"] return batch ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image()) ds = ds.with_transform(lambda x: x) # <= This line causes the problem ds = ds.map(add_C, batched=True) print(ds[0]) ``` ## Expected results ``` {'C': <PIL.PngImagePlugin.PngImageFile>, ...} ``` ## Actual results ``` {'C': {'bytes': None, 'path': 'image.png'}, ...} ``` If we remove the `with_transform` line, we get the expected result. ## Environment info - `datasets` version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3992/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3991/comments
https://api.github.com/repos/huggingface/datasets/issues/3991/events
https://github.com/huggingface/datasets/issues/3991
1,177,362,901
I_kwDODunzps5GLSHV
3,991
Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, ...
open
false
null
[]
null
[]
2022-03-22T22:16:05
2022-03-23T12:57:16
null
NONE
null
null
null
## Adding a Dataset - **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)* - **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.* - **Data:** *[link to the Github repository or current dataset location](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)* - **Motivation:** *Key dataset in the healthcare community* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). FYI @osanseviero @abidlabs
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3991/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3991/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/3990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3990/comments
https://api.github.com/repos/huggingface/datasets/issues/3990/events
https://github.com/huggingface/datasets/issues/3990
1,176,976,247
I_kwDODunzps5GJzt3
3,990
Improve AutomaticSpeechRecognition task template
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "There is an open PR to do that: #3364. I just haven't had time to finish it... ", "> There is an open PR to do that: #3364. I just haven't had time to finish it...\r\n\r\n😬 thanks..." ]
2022-03-22T15:41:08
2022-03-23T17:12:40
2022-03-23T17:12:40
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** [AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated as it uses path to audiofile as an audio column instead of a Audio feature itself (I guess it's because Audio feature didn't exist at the time this template was created). **Describe the solution you'd like** Change audio columns from string path to Audio feature.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3990/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3986/comments
https://api.github.com/repos/huggingface/datasets/issues/3986/events
https://github.com/huggingface/datasets/issues/3986
1,176,429,565
I_kwDODunzps5GHuP9
3,986
Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
{ "login": "kelvinAI", "id": 10686779, "node_id": "MDQ6VXNlcjEwNjg2Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kelvinAI", "html_url": "https://github.com/kelvinAI", "followers_url": "https://api.github.com/users/kelvinAI/followers", "following_url": "https://api.github.com/users/kelvinAI/following{/other_user}", "gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions", "organizations_url": "https://api.github.com/users/kelvinAI/orgs", "repos_url": "https://api.github.com/users/kelvinAI/repos", "events_url": "https://api.github.com/users/kelvinAI/events{/privacy}", "received_events_url": "https://api.github.com/users/kelvinAI/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?", "Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datas...
2022-03-22T08:23:21
2023-03-06T16:55:04
null
NONE
null
null
null
## Describe the bug Dataset loads indefinitely after modifying cache path (~/.cache/huggingface) If none of the environment variables are set, this custom dataset loads fine ( json-based dataset with custom dataset load script) ** Update: Transformer modules faces the same issue as well during loading ## A clear and concise description of what the bug is. Issue: - Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory - No error code, had to terminate the process - There are some files created in the cache directory: ``` custom_cache_dir | -- modules | -- __init__.py | -- datasets_modules | -- __init__.py | -- datasets | -- __init__.py | -- script.py (Dataset loading script) | -- script.lock ``` There's no error nor any logs thrown so I'm out of ideas of how to to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk. ## Steps to reproduce the bug What I've tried: - Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703) - Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html) - Modifying cache_dir param during runtime ```python >>> from datasets import load_dataset >>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache') ``` - Disabling dataset cache ```python >>> from datasets import set_caching_enabled >>> set_caching_enabled(False) ``` ## Expected results Datasets should load / cache as usual with the only exception that cache directory is different ## Actual results Any actions taken above to change the cache directory results in loading indefinitely without terminating. ## Environment info - `transformers` version: 4.18.0.dev0 - Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3986/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/3985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3985/comments
https://api.github.com/repos/huggingface/datasets/issues/3985/events
https://github.com/huggingface/datasets/issues/3985
1,175,982,937
I_kwDODunzps5GGBNZ
3,985
[image feature] Too many files open error when image feature is returned as a path
{ "login": "apsdehal", "id": 3616806, "node_id": "MDQ6VXNlcjM2MTY4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apsdehal", "html_url": "https://github.com/apsdehal", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "repos_url": "https://api.github.com/users/apsdehal/repos", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
2022-03-21T21:54:05
2022-03-23T18:19:27
2022-03-23T18:19:27
CONTRIBUTOR
null
null
null
## Describe the bug PR in context: #3967. If I load the dataset in this PR (TextVQA), and do a simple list comprehension on the dataset, I get `Too many open files error`. This is happening due to the way we are loading the image feature when a str path is returned from the `_generate_examples`. Specifically at https://github.com/huggingface/datasets/blob/508eb4ab5d52f590baa677b4f64b1cc069139f7b/src/datasets/features/image.py#L110, we are open the file handle to the image but never closing it. This in my understanding is causing the issue. ## Steps to reproduce the bug Pull the PR locally and run the following code ```python from datasets import load_dataset dataset = load_dataset("./datasets/textvqa")["train"] data = [item for item in dataset] # Error happens ``` ## Expected results List comprehension should work smoothly ## Actual results `Too many open files error` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.1.dev0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.10.0 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3985/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3984/comments
https://api.github.com/repos/huggingface/datasets/issues/3984/events
https://github.com/huggingface/datasets/issues/3984
1,175,822,117
I_kwDODunzps5GFZ8l
3,984
Local and automatic tests fail
{ "login": "MarkusSagen", "id": 20767068, "node_id": "MDQ6VXNlcjIwNzY3MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarkusSagen", "html_url": "https://github.com/MarkusSagen", "followers_url": "https://api.github.com/users/MarkusSagen/followers", "following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}", "gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions", "organizations_url": "https://api.github.com/users/MarkusSagen/orgs", "repos_url": "https://api.github.com/users/MarkusSagen/repos", "events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}", "received_events_url": "https://api.github.com/users/MarkusSagen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! To be able to run the tests, you need to install all the test dependencies and additional ones with\r\n```\r\npip install -e .[tests]\r\npip install -r additional-tests-requirements.txt --no-deps\r\n```\r\n\r\nIn particular, you probably need to `sacrebleu`. It looks like it wasn't able to instantiate `sacreb...
2022-03-21T19:07:37
2023-07-25T15:18:40
2023-07-25T15:18:40
NONE
null
null
null
## Describe the bug Running the tests from CircleCI on a PR or locally fails, even with no changes. Tests seem to fail on `test_metric_common.py` ## Steps to reproduce the bug ```shell git clone https://huggingface/datasets.git cd datasets ``` ```python python -m pip install -e . pytest ``` ## Expected results All tests passing ## Actual results ``` tests/test_metric_common.py:91: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.pyenv/versions/3.8.5/lib/python3.8/doctest.py:1336: in __run exec(compile(example.source, filename, "single", <doctest datasets_modules.metrics.ter.c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155.ter.Ter[3]>:1: in <module> ??? ../datasets/src/datasets/metric.py:430: in compute output = self._compute(**inputs, **compute_kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Metric(name: "ter", features: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Val...ences=references) >>> print(results) {'score': 0.0, 'num_edits': 0, 'ref_length': 6.5} """, stored examples: 0) predictions = ['hello there general kenobi', 'foo bar foobar'] references = [['hello there general kenobi', 'hello there !'], ['foo bar foobar', 'foo bar foobar']] normalized = False, no_punct = False, asian_support = False, case_sensitive = False def _compute( self, predictions, references, normalized: bool = False, no_punct: bool = False, asian_support: bool = False, case_sensitive: bool = False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] > sb_ter = TER(normalized, no_punct, asian_support, case_sensitive) E TypeError: __init__() takes 2 positional arguments but 5 were given /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/metrics/ter/c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155/ter.py:130: TypeError ------------------------------ Captured stdout call ------------------------------- Trying: predictions = ["hello there general kenobi", "foo bar foobar"] Expecting nothing ok Trying: references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]] Expecting nothing ok Trying: ter = datasets.load_metric("ter") Expecting nothing ok Trying: results = ter.compute(predictions=predictions, references=references) Expecting nothing ================================ warnings summary ================================= ../.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15 /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses from imp import load_source ../datasets/src/datasets/commands/test.py:35 /home/markussagen/datasets/src/datasets/commands/test.py:35: PytestCollectionWarning: cannot collect test class 'TestCommand' because it has a __init__ constructor (from: tests/commands/test_test.py) class TestCommand(BaseDatasetsCLICommand): tests/commands/test_test.py:33 /home/markussagen/mydataset/tests/commands/test_test.py:33: PytestCollectionWarning: cannot collect test class 'TestCommandArgs' because it has a __new__ constructor (from: tests/commands/test_test.py) class TestCommandArgs: tests/test_arrow_dataset.py: 760 warnings tests/test_formatting.py: 60 warnings tests/test_search.py: 31 warnings tests/features/test_array_xd.py: 117 warnings /home/markussagen/datasets/src/datasets/formatting/formatting.py:197: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape)) tests/test_arrow_dataset.py: 154 warnings tests/features/test_array_xd.py: 1 warning /home/markussagen/datasets/src/datasets/formatting/formatting.py:201: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object}) tests/test_arrow_dataset.py: 60 warnings /home/markussagen/datasets/src/datasets/arrow_dataset.py:3105: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations elif np.issubdtype(values.dtype, np.str): tests/test_arrow_dataset.py: 138 warnings tests/test_formatting.py: 21 warnings /home/markussagen/datasets/src/datasets/formatting/tf_formatter.py:69: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations data_struct.dtype == np.object tests/test_arrow_dataset.py: 240 warnings tests/test_formatting.py: 20 warnings /home/markussagen/datasets/src/datasets/formatting/torch_formatter.py:49: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects tests/test_arrow_dataset.py: 12 warnings tests/test_search.py: 2 warnings tests/features/test_array_xd.py: 6 warnings tests/features/test_image.py: 4 warnings /home/markussagen/datasets/src/datasets/features/features.py:1129: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations [0] + [len(arr) for arr in l_arr], dtype=np.object tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_banking77 /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/banking77/aec0289529599d4572d76ab00c8944cb84f88410ad0c9e7da26189d31f62a55b/banking77.py:24: DeprecationWarning: invalid escape sequence \~ _CITATION = """\ tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_universal_dependencies /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/universal_dependencies/065e728dfe9a8371434a6e87132c2386a6eacab1a076d3a12aa417b994e6ef7d/universal_dependencies.py:6: DeprecationWarning: invalid escape sequence \= _CITATION = """\ tests/test_filesystem.py: 105 warnings /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/responses/__init__.py:398: DeprecationWarning: stream argument is deprecated. Use stream parameter in request directly warn( tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs /home/markussagen/datasets/src/datasets/formatting/jax_formatter.py:57: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations if data_struct.dtype == np.object: # jax arrays cannot be instantied from an array of objects tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:3567: UserWarning: Explicitly requested dtype <class 'jax._src.numpy.lax_numpy.int64'> requested in array is not available, and will be truncated to dtype int32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more. lax._check_user_dtype_supported(dtype, "array") tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/apscheduler/util.py:95: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html if obj.zone == 'local': tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features _audio /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/librosa/core/constantq.py:1059: DeprecationWarning: `np.complex` is a deprecated alias for the builtin `complex`. To silence this warning, use `complex` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations dtype=np.complex, tests/features/test_array_xd.py::test_array_xd_with_none /home/markussagen/mydataset/tests/features/test_array_xd.py:338: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations assert isinstance(arr, np.ndarray) and arr.dtype == np.object and arr.shape == (3,) -- Docs: https://docs.pytest.org/en/stable/warnings.html ============================= short test summary info ============================= FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bleurt - I... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_chrf - Att... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_code_eval FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_comet - Im... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_coval - Im... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_ter - Type... ``` ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Linux-5.16.11-76051611-generic-x86_64-with-glibc2.33 - Python version: 3.8.5 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3984/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3983/comments
https://api.github.com/repos/huggingface/datasets/issues/3983/events
https://github.com/huggingface/datasets/issues/3983
1,175,759,412
I_kwDODunzps5GFKo0
3,983
Infinitely attempting lock
{ "login": "jyrr", "id": 11869652, "node_id": "MDQ6VXNlcjExODY5NjUy", "avatar_url": "https://avatars.githubusercontent.com/u/11869652?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jyrr", "html_url": "https://github.com/jyrr", "followers_url": "https://api.github.com/users/jyrr/followers", "following_url": "https://api.github.com/users/jyrr/following{/other_user}", "gists_url": "https://api.github.com/users/jyrr/gists{/gist_id}", "starred_url": "https://api.github.com/users/jyrr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jyrr/subscriptions", "organizations_url": "https://api.github.com/users/jyrr/orgs", "repos_url": "https://api.github.com/users/jyrr/repos", "events_url": "https://api.github.com/users/jyrr/events{/privacy}", "received_events_url": "https://api.github.com/users/jyrr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting. We're using `py-filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest vers...
2022-03-21T18:11:57
2022-05-06T16:12:18
2022-05-06T16:12:18
NONE
null
null
null
I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`. Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS). ``` %sh python /dbfs/transformers/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /dbfs/transformers/tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --log_level debug \ --cache_dir /dbfs/transformers/cache ``` All goes well until acquiring a lock -- ``` 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... ``` and so on. I imagine this has to do with DBFS -- is there a way to tackle this?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3983/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3978/comments
https://api.github.com/repos/huggingface/datasets/issues/3978/events
https://github.com/huggingface/datasets/issues/3978
1,175,226,456
I_kwDODunzps5GDIhY
3,978
I can't view HFcallback dataset for ASR Space
{ "login": "kingabzpro", "id": 36753484, "node_id": "MDQ6VXNlcjM2NzUzNDg0", "avatar_url": "https://avatars.githubusercontent.com/u/36753484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kingabzpro", "html_url": "https://github.com/kingabzpro", "followers_url": "https://api.github.com/users/kingabzpro/followers", "following_url": "https://api.github.com/users/kingabzpro/following{/other_user}", "gists_url": "https://api.github.com/users/kingabzpro/gists{/gist_id}", "starred_url": "https://api.github.com/users/kingabzpro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kingabzpro/subscriptions", "organizations_url": "https://api.github.com/users/kingabzpro/orgs", "repos_url": "https://api.github.com/users/kingabzpro/repos", "events_url": "https://api.github.com/users/kingabzpro/events{/privacy}", "received_events_url": "https://api.github.com/users/kingabzpro/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "the dataset viewer is working on this dataset. I imagine the issue is that we would expect to be able to listen to the audio files in the `Please Record Your Voice file` column, right?\r\n\r\nmaybe @lhoestq or @albertvillanova could help\r\n\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-03-24 à 17 36 20\" sr...
2022-03-21T11:07:49
2023-09-25T12:19:53
null
NONE
null
null
null
## Dataset viewer issue for '*Urdu-ASR-flags*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)* *I think dataset should show some thing and if you want me to add script, please show me the documentation. I thought this was suppose to be automatic task.* Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3978/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/3977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3977/comments
https://api.github.com/repos/huggingface/datasets/issues/3977/events
https://github.com/huggingface/datasets/issues/3977
1,175,049,927
I_kwDODunzps5GCdbH
3,977
Adapt `docs/README.md` for datasets
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "Thanks for reporting @qqaatw.\r\n\r\nYes, we should definitely adapt that file for `datasets`. " ]
2022-03-21T08:26:49
2023-02-27T10:32:37
2023-02-27T10:32:37
CONTRIBUTOR
null
null
null
## Describe the bug Currently `docs/README.md` is a direct copy from `transformers`, we should probably adapt this file for `datasets`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3977/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3977/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3973/comments
https://api.github.com/repos/huggingface/datasets/issues/3973/events
https://github.com/huggingface/datasets/issues/3973
1,174,455,431
I_kwDODunzps5GAMSH
3,973
ConnectionError and SSLError
{ "login": "yanyu2015", "id": 11142054, "node_id": "MDQ6VXNlcjExMTQyMDU0", "avatar_url": "https://avatars.githubusercontent.com/u/11142054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanyu2015", "html_url": "https://github.com/yanyu2015", "followers_url": "https://api.github.com/users/yanyu2015/followers", "following_url": "https://api.github.com/users/yanyu2015/following{/other_user}", "gists_url": "https://api.github.com/users/yanyu2015/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanyu2015/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanyu2015/subscriptions", "organizations_url": "https://api.github.com/users/yanyu2015/orgs", "repos_url": "https://api.github.com/users/yanyu2015/repos", "events_url": "https://api.github.com/users/yanyu2015/events{/privacy}", "received_events_url": "https://api.github.com/users/yanyu2015/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! You can download the `oscar.py` file from this repository at `/datasets/oscar/oscar.py`.\r\n\r\nThen you can load the dataset by passing the local path to `oscar.py` to `load_dataset`:\r\n```python\r\nload_dataset(\"path/to/oscar.py\", \"unshuffled_deduplicated_it\")\r\n```", "it works,but another error occ...
2022-03-20T06:45:37
2022-03-30T08:13:32
2022-03-30T08:13:32
NONE
null
null
null
code ``` from datasets import load_dataset dataset = load_dataset('oscar', 'unshuffled_deduplicated_it') ``` bug report ``` --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_29788/2615425180.py in <module> ----> 1 dataset = load_dataset('oscar', 'unshuffled_deduplicated_it') D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1658 1659 # Create a dataset builder -> 1660 builder_instance = load_dataset_builder( 1661 path=path, 1662 name=name, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1484 download_config = download_config.copy() if download_config else DownloadConfig() 1485 download_config.use_auth_token = use_auth_token -> 1486 dataset_module = dataset_module_factory( 1487 path, 1488 revision=revision, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1236 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" 1237 ) from None -> 1238 raise e1 from None 1239 else: 1240 raise FileNotFoundError( D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1173 if path.count("/") == 0: # even though the dataset is on the Hub, we get it from GitHub for now 1174 # TODO(QL): use a Hub dataset module factory instead of GitHub -> 1175 return GithubDatasetModuleFactory( 1176 path, 1177 revision=revision, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in get_module(self) 531 revision = self.revision 532 try: --> 533 local_path = self.download_loading_script(revision) 534 except FileNotFoundError: 535 if revision is not None or os.getenv("HF_SCRIPTS_VERSION", None) is not None: D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in download_loading_script(self, revision) 511 if download_config.download_desc is None: 512 download_config.download_desc = "Downloading builder script" --> 513 return cached_path(file_path, download_config=download_config) 514 515 def download_dataset_infos_file(self, revision: Optional[str]) -> str: D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 232 if is_remote_url(url_or_filename): 233 # URL, so get it from the cache (downloading if necessary) --> 234 output_path = get_from_cache( 235 url_or_filename, 236 cache_dir=cache_dir, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc) 580 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 581 if head_error is not None: --> 582 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") 583 elif response is not None: 584 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/oscar/oscar.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.0.0/datasets/oscar/oscar.py (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))"))) ``` It may be caused by Caused by SSLError(in China?) because it works well on google colab. So how can I download this dataset manually?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3973/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3969/comments
https://api.github.com/repos/huggingface/datasets/issues/3969/events
https://github.com/huggingface/datasets/issues/3969
1,174,273,824
I_kwDODunzps5F_f8g
3,969
Cannot preview cnn_dailymail dataset
{ "login": "hasan-besh", "id": 75482871, "node_id": "MDQ6VXNlcjc1NDgyODcx", "avatar_url": "https://avatars.githubusercontent.com/u/75482871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hasan-besh", "html_url": "https://github.com/hasan-besh", "followers_url": "https://api.github.com/users/hasan-besh/followers", "following_url": "https://api.github.com/users/hasan-besh/following{/other_user}", "gists_url": "https://api.github.com/users/hasan-besh/gists{/gist_id}", "starred_url": "https://api.github.com/users/hasan-besh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasan-besh/subscriptions", "organizations_url": "https://api.github.com/users/hasan-besh/orgs", "repos_url": "https://api.github.com/users/hasan-besh/repos", "events_url": "https://api.github.com/users/hasan-besh/events{/privacy}", "received_events_url": "https://api.github.com/users/hasan-besh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I guess the cache got corrupted due to a previous issue with Google Drive service.\r\n\r\nThe cache should be regenerated, e.g. by passing `download_mode=\"force_redownload\"`.\r\n\r\nCC: @severo ", "Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode=\"force_redownload\"` ...
2022-03-19T14:08:57
2022-04-20T15:52:49
2022-04-20T15:52:49
NONE
null
null
null
## Dataset viewer issue for '*cnn_dailymail*' **Link:** https://huggingface.co/datasets/cnn_dailymail *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3969/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3968/comments
https://api.github.com/repos/huggingface/datasets/issues/3968/events
https://github.com/huggingface/datasets/issues/3968
1,174,193,962
I_kwDODunzps5F_Mcq
3,968
Cannot preview 'indonesian-nlp/eli5_id' dataset
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.c...
null
[ "Hi @cahya-wirawan, thanks for reporting.\r\n\r\nYour dataset is working OK in streaming mode:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"indonesian-nlp/eli5_id\", split=\"train\", streaming=True)\r\n ...: item = next(iter(ds))\r\n ...: item\r\nUsing custom data con...
2022-03-19T06:54:09
2022-03-24T16:34:24
2022-03-24T16:34:24
CONTRIBUTOR
null
null
null
## Dataset viewer issue for '*indonesian-nlp/eli5_id*' **Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id I can not see the dataset preview. ``` Server Error Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist. ``` Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3968/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3965/comments
https://api.github.com/repos/huggingface/datasets/issues/3965/events
https://github.com/huggingface/datasets/issues/3965
1,173,708,739
I_kwDODunzps5F9V_D
3,965
TypeError: Couldn't cast array of type for JSONLines dataset
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)." ]
2022-03-18T15:17:53
2022-05-06T16:13:51
2022-05-06T16:13:51
MEMBER
null
null
null
## Describe the bug One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below). This reminds me a bit of #2799 where one can load the dataset in `pandas` but not in `datasets` and perhaps increasing the `block_size` is needed again. ## Steps to reproduce the bug ```python from datasets import load_dataset from huggingface_hub import hf_hub_url import pandas as pd # returns 'https://huggingface.co/datasets/Evan/spaCy-github-issues/resolve/main/spacy-issues.jsonl' data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset") # throws TypeError: Couldn't cast array of type dset = load_dataset("json", data_files=data_files, split="test") # no problem with pandas - note this take a while as the file is >2GB df = pd.read_json(data_files, orient="records", lines=True) df.head() ``` ## Expected results I can load any line-separated JSON file, similar to pandas. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 683, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 1136, in _prepare_split writer.write_table(table) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 511, in write_table pa_table = table_cast(pa_table, self._schema) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1121, in table_cast return cast_table_to_features(table, Features.from_arrow_schema(schema)) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in cast_table_to_features arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper return func(array, *args, **kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1086, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper return func(array, *args, **kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 920, in wrapper return func(array, *args, **kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1019, in array_cast raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") TypeError: Couldn't cast array of type struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: timestamp[s], updated_at: timestamp[s], due_on: null, closed_at: timestamp[s]> to null ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.7 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3965/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3965/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3964/comments
https://api.github.com/repos/huggingface/datasets/issues/3964/events
https://github.com/huggingface/datasets/issues/3964
1,173,564,993
I_kwDODunzps5F8y5B
3,964
Add default Audio Loader
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "...
null
[]
2022-03-18T12:58:55
2022-08-22T14:20:46
2022-08-22T14:20:46
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Writing a custom loading dataset script might be a bit challenging for users. **Describe the solution you'd like** Add default Audio loader (analogous to ImageFolder) for small datasets with standard directory structure. **Describe alternatives you've considered** Create a custom loading script? that's what users doing now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3964/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3961/comments
https://api.github.com/repos/huggingface/datasets/issues/3961/events
https://github.com/huggingface/datasets/issues/3961
1,173,223,086
I_kwDODunzps5F7fau
3,961
Scores from Index at extra positions are not filtered out
{ "login": "vishalsrao", "id": 36671559, "node_id": "MDQ6VXNlcjM2NjcxNTU5", "avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishalsrao", "html_url": "https://github.com/vishalsrao", "followers_url": "https://api.github.com/users/vishalsrao/followers", "following_url": "https://api.github.com/users/vishalsrao/following{/other_user}", "gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions", "organizations_url": "https://api.github.com/users/vishalsrao/orgs", "repos_url": "https://api.github.com/users/vishalsrao/repos", "events_url": "https://api.github.com/users/vishalsrao/events{/privacy}", "received_events_url": "https://api.github.com/users/vishalsrao/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! Yes, that makes sense! Would you like to submit a PR to fix this?", "Created PR https://github.com/huggingface/datasets/pull/3971" ]
2022-03-18T06:13:23
2022-04-12T14:41:58
2022-04-12T14:41:58
CONTRIBUTOR
null
null
null
If a FAISS index has fewer records than the requested number of top results (k), then it returns -1 in indices for the additional positions. The get_nearest_examples method only filters out the extra results from the dataset samples. It would be better to filter out extra scores too. Reference: https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/search.py#L693
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3961/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3960
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3960/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3960/comments
https://api.github.com/repos/huggingface/datasets/issues/3960/events
https://github.com/huggingface/datasets/issues/3960
1,173,148,884
I_kwDODunzps5F7NTU
3,960
Load local dataset error
{ "login": "TXacs", "id": 60869411, "node_id": "MDQ6VXNlcjYwODY5NDEx", "avatar_url": "https://avatars.githubusercontent.com/u/60869411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TXacs", "html_url": "https://github.com/TXacs", "followers_url": "https://api.github.com/users/TXacs/followers", "following_url": "https://api.github.com/users/TXacs/following{/other_user}", "gists_url": "https://api.github.com/users/TXacs/gists{/gist_id}", "starred_url": "https://api.github.com/users/TXacs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TXacs/subscriptions", "organizations_url": "https://api.github.com/users/TXacs/orgs", "repos_url": "https://api.github.com/users/TXacs/repos", "events_url": "https://api.github.com/users/TXacs/events{/privacy}", "received_events_url": "https://api.github.com/users/TXacs/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg...
open
false
null
[]
null
[ "Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('image...
2022-03-18T03:32:49
2023-08-02T17:12:20
null
NONE
null
null
null
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this: ``` >>> from datasets import load_dataset >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']} >>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification') [] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset **config_kwargs, File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder **config_kwargs, File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__ super().__init__(*args, **kwargs) File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__ sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions): File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally raise FileNotFoundError(error_msg) FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main ``` I need some help to solve the problem, thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3960/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/3959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3959/comments
https://api.github.com/repos/huggingface/datasets/issues/3959/events
https://github.com/huggingface/datasets/issues/3959
1,172,872,695
I_kwDODunzps5F6J33
3,959
Medium-sized dataset conversion from pandas causes a crash
{ "login": "Antymon", "id": 641005, "node_id": "MDQ6VXNlcjY0MTAwNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/641005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Antymon", "html_url": "https://github.com/Antymon", "followers_url": "https://api.github.com/users/Antymon/followers", "following_url": "https://api.github.com/users/Antymon/following{/other_user}", "gists_url": "https://api.github.com/users/Antymon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Antymon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Antymon/subscriptions", "organizations_url": "https://api.github.com/users/Antymon/orgs", "repos_url": "https://api.github.com/users/Antymon/repos", "events_url": "https://api.github.com/users/Antymon/events{/privacy}", "received_events_url": "https://api.github.com/users/Antymon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! It looks like an issue with pyarrow, could you try updating pyarrow and try again ?", "@albertvillanova did you find a solution to this?", "I´m getting the same problem with some files, @albertvillanova did you find a solution to this?" ]
2022-03-17T20:20:35
2022-12-12T17:14:06
2022-04-20T12:35:37
NONE
null
null
null
Hi, I am suffering from the following issue: ## Describe the bug Conversion to arrow dataset from pandas dataframe of a certain size deterministically causes the following crash: ``` File "/home/datasets_crash.py", line 7, in <module> arrow=datasets.Dataset.from_pandas(d) File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 783, in from_pandas table = InMemoryTable.from_pandas( File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/table.py", line 379, in from_pandas return cls(pa.Table.from_pandas(*args, **kwargs)) File "pyarrow/table.pxi", line 1487, in pyarrow.lib.Table.from_pandas File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458) ``` ## Steps to reproduce the bug I have a dataset made from replicated single example mocking a dict representation of a publication. I copy over this example 140k times and create a pandas frame. I use 'Dataset.from_pandas' and boom ```python # Sample code to reproduce the bug import copy import datasets import pandas # serialized dict is quite long to be realistic representation of a publication content paper_as_dict=eval("{'article_id': '2020-11-05T14:25:05.321Z02bc3286-91b7-486a-9c74-4f457fbc586a', 'sections': [{'section_id': 'body.0', 'paragraphs': [{'sentences': ['11010111001000000011010011110011101110111011000100001010011100101001111010110111101011101111101010101110001111011110111010111', '1101100110110010010101010100110011000111001100100000011100010111010000011100001101111000000011010111001111001010101111110011010010111011000110100110010', '101011011000010100000010011001011011000000110011011110000101001110110000010001100110111100011100110101010010110000101', '1101101110101010101000000010101011111001111000101000110001110100111000100000011001110100110000110100111011001010110011101001001110']}]}, {'section_id': 'body.1', 'paragraphs': [{'sentences': ['11111100100100111000101001011110100110011001011011001001100110100111011010000110011000010001010100101110001001101011110111110101111100001001001000011110110010110011100110110111110011100011111000101010111010101011001110000100000001001010010010011101111100011010', '10101000110000110111110011101111000101010010001001010000001111001100000010001000001110111110010011101000000111011', '111010011111101111110011111110110001000111100101001000100110101111110000111000111111110000101001101000110011010111011101001010110110001000100000001110001111100110110001110001001100011010100110100010100111000110110100010010100101011110000110000101010010001110101100000']}, {'sentences': ['111110011110110110001111001101011110010110100011101010110101011001101110110111100000111101010110011110111101001111000101110001001010010101100111111001001000011101000100110000101', '011101101101111101001100101010000010111101100101110100101000001100010100110011010010100001101001110111100011010011011111000111111101110001010111010011010110001000010101100110000100010110101110110011001010011001100111101100001001', '1110001011011010101001100001110001110001000111111111101110100001011101101001110100000110000011010001101010101110101110101101001010100100010000000010110010010010', '11101111000111111100111110010000111101110010010101001111011001111110011000011100110001010010000100101010', '111000110110110010101100010010100001100100110010101000001000011101000100101011011010000011001011011111001101100001110010100001111110111001001010101100100110001011011100000101010010000000001100010000101100110110111101110010100010011101110110111010011011000011001010111011100000000010101001011000100000011010100011101001011001010010011110100100']}, {'sentences': ['001101111100001101001001001110000110010101011101001001111111011000111001111011101011110111000000100001110110101110001010001111110100010', '0000110010110101001100011011000011001101001110001000000110010101000011101011110110000000100111000001010000101011111011110001001100001110101010101110101011111000000011001111011110001010010111010000100100000001111001011100101111010101111001001101100101001101111000111011010110010001010010010111010000001101101111100101000111101011001000101', '00000101100101100111101010000101011100101100001100011001100100001100001010001010010011001001111001000010100010000110100111110000001000101000111100010111110011000100000111100010000100010111100010101', '111100110010100110000010010101010101110011110100000101110000000111010101111001011110010101001110000001001000010110010010011110111110010110100101110011001101110111001111100011100100011110010010100101011111111']}, {'sentences': ['1100001110101111000001011001100110001011100011110110010011001000101000011110010101010011011000111010000101010011010000000111011001000010100101000011111101000000000101111000', '1110101000100110001111000011000101110111001100101010011001100011010011111111111010101011010101010011000101001100100000110010100110110110110001101100', '00010001100100101100100111111110111111101000100110101111101111110101110001010001011100000000000011010101101001111010001110101101110011001011111101110100010000111101', '011100011101011001000110010110100100000010100010010110011000000010101110011111111101010010010001100110101010010001100010110011110001011011101010111111100100110110010111101001100101010111001', '10111000011010101111110110011010101011111001000001010010111111010010111111100100010100110100101101110100110011001000110100000111000100110000001000111010', '0010011111111011100111010001111001011101001010000010110000010111000101001101000011101110100100000000100100010010101010100011100101001000100110110000010111111110000011011101111000111010']}]}, {'section_id': 'body.2.0', 'paragraphs': [{'sentences': ['110010010011001110100100011001111100010011110111101011011011001010010010010011101011', '000110101110011011101011000000100011111000001100011011110101101011000110011010001010001101101100000111100101001011111001001101111', '1000011100100000100100100010010000111011000100110010000011110111100110110001101001010100011111010100101000111', '11110111111000110010000000000100010010110001100010001010000111011000101100011010010101110110011010110101001101110011101011101100000001000100101011010110110100101011101010010101101000011110000010101011001011000001000000001010110000100010000100011110101001111100001000100000111000001010011111111110101010100011011000010000111000110', '1001000111011000111110001111111001100001000000101000111011101101100101010110001101000000001111010111100011111000000100001001110', '100110010111010101111010100000010001110101111001010010001100001110100100100101110011010101001000100101000100100011001110001100111000010010011011000010011010010000110001000000100011110010110110011010001100111010111110011']}, {'sentences': ['10010101011100010111011111001001001010100011001001111101101001000000001111101110000111101011000001001011101110101001100010010001101111001110000100010010001001101111011111110010011011110011', '110001110010110000101111000000110010010010100000010100001111101101000101100000000110000000011111011001111000010110110001011010011011101100100110011000100110101010111010111111000111001111010110010001001110100001011011000110000000111101110000001111011011101110100000100010000110001000000110100000', '101010000000010000110110111000110000100111000001110100101101101010001010010010101010100111010110001001000101011110010011001001001110111001101101100100011110011011110101100010110111001010000001000110100000001010011111111110111010011110001001110100011011000101011000110110011011010110100100011111111011100111110110000110011011110110110011101010101111001101010110101000000001100101111010000101110', '1010100110111111111000110110111110010100000100001110101110111001011000010001110110001111111110000101001001110010001110000111010101111010111111011100100011100111111101101111000010001100101000010001100110110100110111111100100011001011000001111110010100110111000010011110111011001101100000101011111110101000011000010', '00000001110000101001110101110011101001110011000111111101111101111000010011100000101000001011001110', '101000111010010000011010011010011010010010100010110100011100100111011101010100101110100111010001000000', '01101000110001101011001101100010100011011010000000001010101000010101000110100010000000110001110001010010000000101101000011000100000110011101100001010100011111101010010110001101110101010111101100001110000011001101', '0010010111000011110010011110001010100000111100001011010100100010101010010011101101100110001001111001000110000111011110010000110101010110111111010110100000011010001001010001000110001101101000101110001011110000101101110000110010110010111001100010011011100011', '00110111110000000100110111101011000100100110001000001001101011001000010100100001100111100110000110110101111010000010101000000101000011001011101001', '0100100001000111001110110110000001000100111001101101110100100111010111110001110010110111100110011111001001000011101110100101111011000110100000111010011101']}, {'sentences': ['100001001011101111111100110111011110001101111101100001000110110000100101011000000100000', '10101001001111110101001010100110011110101101001']}]}, {'section_id': 'body.2.0.0', 'paragraphs': [{'sentences': ['1110101100001100011000101000010000100010101101010110101011100101110110110111010101001100100000000111011001000100011110101011111010100101001010000010001001101010100011110010101110011001100010000100110011000011101010001000111001000001100', '101000000011001001110101000100101010000111000111100010010001111111100110001100000100011010011010010101101111010101010000110011101001111001111011111001110001010000110101101011101111010000001100', '01100001011110010100000101001101111101010011100010011001011110110010010011100101000', '0011100111000101111000010001111100000111000101110001111010001100001000111010000101100001110101100111111', '00001100000011110001011010010110000000111110110001111000110000011011001110000000100011001010110000010000010001101010101100000010011011000101011111100010010', '1011101011101111000001100100111000011000010010011110011000110111010010111100111101100110011010000110000111000110111110101111000001000010011101111000110000100011110101101101001101000110010000001000010011011010101100', '1000010011100011100000010011011111111110101101111011101010010111000000101011000000110101111000010011', '01100000110011001110101111101101011001011101000010001100101010100011010101010100111011011110100010100111', '011011010100011011110010101000110001111110110']}]}, {'section_id': 'body.2.0.1', 'paragraphs': [{'sentences': ['00111011011101000100100111000001101001011000111100100010101001010011001011000010011111001100000100010001100101110011001000110001101011010111011111011000010011010010111010011111101000110111011100010011100111111110110111011', '011011010101101101010000001011010110011111011110100111010101010110001101000010011111000011100', '110001000110010000000111101110111110101110111000101000010001110101000101001000111000010001011101010000110001010001101001001110111110111010111010011101000101101010000', '001000111110100110000001111100000111001110111001110111001000111010001001100111001101000001001001010111000111011100001111011001111110001011000111110011111101011101000100101001111011100001000110101010101111111110011111111011000101110001000000000100111011111011001100111', '11010101100010010100010010010101001011001011000001100010101111111101001101110011001010010100000111010101', '01110000110011111000110010011010000011100000010010001111100010010100100001011011111110001100', '011101111100011101100111110101111001101010010001001110101100001101000000111000']}]}, {'section_id': 'body.2.0.2', 'paragraphs': [{'sentences': ['0111011000110100110000001011001110111000011110100111011000000001000010001111111001101111011100101110101101000111000101000010000111011010110000011101111110111110100111000111000011', '00100110111000110101100111000110100010011010010101001010011000000101000110100110011010011111000100000011000000010001010000100111101011111111101010001111010000001011100001110100000101001101101010011011101000', '000001110001010010100101010100010101001100011001001101101101110111011111101010010111010110110111011110101100001000011110111011001', '0001110010111110100110110011000001111100100100110101011010010101010100101000010101000100101000011011', '1000010010010101001100101110010111010100000110101110000000111001111111001011111010000011110001011001001001000101', '0001111100111010010100010111010110011011000000001111010010110001000011010001100111101110001110000011010101111100001000011010110100000100100001111011110110000000101000010001111001010010110101110111101101110111000100', '1000101100001000100001101110111110000100000001000010101111010011010010010111011010100011001000100100001010001100110']}]}, {'section_id': 'body.2.0.3', 'paragraphs': [{'sentences': ['1010100111100011110110101011100001011010011010100100010011000110111000001010010110111001001101111000010100100110101001010001010001000110010000001', '100010101010100111000011111101010100101110011000100011100100100111000010000011001010010111011010000101010011011110111001010110', '0110000110110110110011011000011010010000001010011000010001011110110010000100011111010100110111111010010111000101111', '10100100000011100010110110011111011011101101111000001001010100001001011010000011001010101100000', '1011111111100001001100000010000100110010101000010100111111110010110011101110000101101011101', '10001111110000011100100000101100000000010000100000011100110000011110111010011101010111101001111000100000000110000011010010001100110111100001001011101011001111110010100111001001010001010011010010010111001101110101110000101011', '101101111111101101010010000110111110000110000111001001010011111101011001011010101100010100110101101011100111100100110010001011110001110010000011101100100100001001110010000010011111100110101']}]}, {'section_id': 'body.2.1', 'paragraphs': [{'sentences': ['1010010011010011001111111001000110010001101111101011001011011000101001010101010001000110100011110101110001110110111010010010100100111000101100100101111110100000011111001101010111101010100101011011110111111110', '000010101101111100000110010110011001111100001101011101000100010001001001000000101101000001110000011010111100000010010000010101110101100010011000101110110111111001000101000111000110100001001100001010101010100011', '0000000011101110111100100010111100101010110001111101110110010000100100010000101001101111001111001001100110010011010000101001110010000000100101011101001010100100011101101001011000010111110100101010110110011001110000110010010111110110101100001011101001100111010001000010111010001010000100010010011110111100110011100011111101101000011100111110101010100110001100100000100011011010111000111110010110100010111101001001101000001100100010000111110000011101111100111101000000000']}, {'sentences': ['01011000010110011000000101101000110101011010100111011001001001100001101101111101111001101111100101111001101011011001011110110110110100001100111111010100101110111111101000101100101010110011111011100101101010100110111001111100100011001110011101000110100000001100001100110001110101001000011010000110101011010000001111100100000100101110011000001001010011011101100011000001100000011', '1001100000101000000011110100110001100001101001100011010000111111010110101111001000100111000011010100100000110110001', '10010011000110110111010110000010010000000111101000100101100111101101001100111110101001001111100001110011110000010101000001000000010100011011110011000100110101001100110111111001101000011010100110000000011110001000101010101000110010010']}]}, {'section_id': 'body.2.2', 'paragraphs': [{'sentences': ['000011000000010011000001101111000101000111111111111010001011110000011001010111010101010110001111110000010', '10101001101011101010001111011000110100000100011110010001100111111101101100010010111110110101101011000011000001101110010111011111100111110000000101110010111', '100001011110010111010110001101101001100000000001000010110101011001111100101101101111010010111111000000111001111010011111000100010001111011110001010000110010101010111110100101011011100001010101000001011011111111101', '1000110111111011101000110101001111111111000100011001000011010100001010011110001111010011011111000111011100101001011111001000010101110110101000111011111111010010001101001010110111000011110101011000010000110', '1011100000100000010101101111001001100110111000010001011010111111000000001010101001111011101011010101101001111101101100101001011101000011011010001001101100100111101111111100010011010101111011100001100001000100101100100110101000010000011000000011001100000110000001', '0001001101111001111111010000001101010110110110100110110100000100110101101010010101011000010010111011000010111110000001110101110111000010011000100110111001000111011000100101110111111', '0110010010011000011010001111001100101001100001001000010100101100010110000000101010110001001010001100111101010001110010010000111011100101101010111111101001100010001011100110010100110111010101000100001110000101110011111011111000010101010110101100010010111100100010010100111110111100101010100011101001110110010000011110001010101010000100010000100111001111011101', '000001010000010001100000101011000000110101000100010111111100101111111000110111001001110110101111110011100001001000011001010000011011', '0101101001010101001101010100011000111011001000100001110100110011100000001001010110001101010110011100111111100101101111101111011001111111110010111010011011011111011011110000101011010', '11000001110111000001100100001110000111001010000101011011101010111001011100010010010111111111000011111110010111100011100110001001100011111010100111110111001110010', '0100010110100001010101110111100011100100010111111011101001100101111110101011010010101111001000101001111000001110001100011001110010100110101100110100100000001010101101011110011001000101100111001001001110100', '100000100010011111001101010000100110011110001100000010010110110100000111111011010100101111010111001110101000100001111101001110000011010110000010100', '00100110000011100101000110110001000011101000011010101000010001111011100001111111001011100111101000001000000110110001000101111010010010001100111', '0110110100011001110011001111100010101001011111011001011001101101010010101101110101010100001000100100000111101110001001110111000110011101101010100000101', '0011111010010011011101010110100110000011000011100100101011011001110110001110001111000011010111011000110100111111011101110111000010010000011011010011011100000011101100110110100100000010110101110100110101001100111011101001010111011011110100110101110010011011010001010111110011001000010100010101010010110010010110000100110001000011010011000100101011010100100111010']}]}]}") d=pandas.DataFrame.from_records(copy.deepcopy(paper_as_dict) for _ in range(140_100)) arrow=datasets.Dataset.from_pandas(d) ``` ## Expected results The dataset should be converted without error. ## Actual results Error `pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets==1.18.4 pandas==1.3.5 - Platform: macOS 11.6 or CentOS Linux 7 (Core) - Python version: Python 3.9.7 - PyArrow version: pyarrow==3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3959/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3956/comments
https://api.github.com/repos/huggingface/datasets/issues/3956/events
https://github.com/huggingface/datasets/issues/3956
1,172,272,327
I_kwDODunzps5F33TH
3,956
TypeError: __init__() missing 1 required positional argument: 'scheme'
{ "login": "amirj", "id": 1645137, "node_id": "MDQ6VXNlcjE2NDUxMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1645137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amirj", "html_url": "https://github.com/amirj", "followers_url": "https://api.github.com/users/amirj/followers", "following_url": "https://api.github.com/users/amirj/following{/other_user}", "gists_url": "https://api.github.com/users/amirj/gists{/gist_id}", "starred_url": "https://api.github.com/users/amirj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amirj/subscriptions", "organizations_url": "https://api.github.com/users/amirj/orgs", "repos_url": "https://api.github.com/users/amirj/repos", "events_url": "https://api.github.com/users/amirj/events{/privacy}", "received_events_url": "https://api.github.com/users/amirj/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @amirj, thanks for reporting.\r\n\r\nAt first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server.\r\n\r\nFeel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_co...
2022-03-17T11:43:13
2023-11-21T04:26:20
2022-03-28T08:00:01
NONE
null
null
null
## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting Elasticsearch version. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset squad = load_dataset('squad', split='validation') squad.add_elasticsearch_index("context", host="localhost", port="9200") ``` ## Expected results [Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) ## Actual results ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-8fb51aa33961> in <module> 1 from datasets import load_dataset 2 squad = load_dataset('squad', split='validation') ----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200") ~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config) 3777 """ 3778 with self.formatted_as(type=None, columns=[column]): -> 3779 super().add_elasticsearch_index( 3780 column=column, 3781 index_name=index_name, ~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config) 587 """ 588 index_name = index_name if index_name is not None else column --> 589 es_index = ElasticSearchIndex( 590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config 591 ) ~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config) 123 from elasticsearch import Elasticsearch # noqa: F811 124 --> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}]) 126 self.es_index_name = ( 127 es_index_name ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport) 310 311 if _transport is None: --> 312 node_configs = client_node_configs( 313 hosts, 314 cloud_id=cloud_id, ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs) 99 else: 100 assert hosts is not None --> 101 node_configs = hosts_to_node_configs(hosts) 102 103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults. ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts) 142 143 elif isinstance(host, Mapping): --> 144 node_configs.append(host_mapping_to_node_config(host)) 145 else: 146 raise ValueError( ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host) 209 options["path_prefix"] = options.pop("url_prefix") 210 --> 211 return NodeConfig(**options) # type: ignore 212 213 TypeError: __init__() missing 1 required positional argument: 'scheme' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Mac - Python version: 3.8.0 - PyArrow version: 7.0.0 - ElaticSearch Info: { "name" : "byname", "cluster_name" : "elasticsearch_brew", "cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA", "version" : { "number" : "7.10.2-SNAPSHOT", "build_flavor" : "oss", "build_type" : "tar", "build_hash" : "unknown", "build_date" : "2021-01-16T01:41:27.115673Z", "build_snapshot" : true, "lucene_version" : "8.7.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3956/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3956/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3954/comments
https://api.github.com/repos/huggingface/datasets/issues/3954/events
https://github.com/huggingface/datasets/issues/3954
1,172,141,664
I_kwDODunzps5F3XZg
3,954
The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
{ "login": "MatanBenChorin", "id": 49593805, "node_id": "MDQ6VXNlcjQ5NTkzODA1", "avatar_url": "https://avatars.githubusercontent.com/u/49593805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatanBenChorin", "html_url": "https://github.com/MatanBenChorin", "followers_url": "https://api.github.com/users/MatanBenChorin/followers", "following_url": "https://api.github.com/users/MatanBenChorin/following{/other_user}", "gists_url": "https://api.github.com/users/MatanBenChorin/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatanBenChorin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatanBenChorin/subscriptions", "organizations_url": "https://api.github.com/users/MatanBenChorin/orgs", "repos_url": "https://api.github.com/users/MatanBenChorin/repos", "events_url": "https://api.github.com/users/MatanBenChorin/events{/privacy}", "received_events_url": "https://api.github.com/users/MatanBenChorin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @MatanBenChorin, thanks for reporting.\r\n\r\nPlease, take into account that the preview may take some time until it properly renders (we are working to reduce this time).\r\n\r\nMaybe @severo can give more details on this.", "Hi, \r\nThank you", "Thanks for reporting. We are looking at it and will give upd...
2022-03-17T09:38:11
2022-04-20T12:39:07
2022-04-20T12:39:07
NONE
null
null
null
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1' **Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true The dataset preview is not available for this dataset. Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3954/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3953/comments
https://api.github.com/repos/huggingface/datasets/issues/3953/events
https://github.com/huggingface/datasets/issues/3953
1,172,123,736
I_kwDODunzps5F3TBY
3,953
Add ImageNet Sketch
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, ...
closed
false
null
[]
null
[ "Can you assign this task to me? @nreimers @mariosasko ", "Hi! Sure! Let us know if you need any pointers." ]
2022-03-17T09:20:31
2022-05-23T18:05:29
2022-05-23T18:05:29
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** ImageNet Sketch - **Description:** ImageNet-Sketch is a dataset consisting of sketch-like images, that matches the ImageNet classification validation set in categories and scale. - **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549) - **Data:** https://github.com/HaohanWang/ImageNet-Sketch - **Motivation:** Allows for evaluating the robustness of vision models. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3953/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3953/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3952/comments
https://api.github.com/repos/huggingface/datasets/issues/3952/events
https://github.com/huggingface/datasets/issues/3952
1,171,895,531
I_kwDODunzps5F2bTr
3,952
Checksum error for glue sst2, stsb, rte etc datasets
{ "login": "ravindra-ut", "id": 22090962, "node_id": "MDQ6VXNlcjIyMDkwOTYy", "avatar_url": "https://avatars.githubusercontent.com/u/22090962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ravindra-ut", "html_url": "https://github.com/ravindra-ut", "followers_url": "https://api.github.com/users/ravindra-ut/followers", "following_url": "https://api.github.com/users/ravindra-ut/following{/other_user}", "gists_url": "https://api.github.com/users/ravindra-ut/gists{/gist_id}", "starred_url": "https://api.github.com/users/ravindra-ut/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ravindra-ut/subscriptions", "organizations_url": "https://api.github.com/users/ravindra-ut/orgs", "repos_url": "https://api.github.com/users/ravindra-ut/repos", "events_url": "https://api.github.com/users/ravindra-ut/events{/privacy}", "received_events_url": "https://api.github.com/users/ravindra-ut/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi, @ravindra-ut.\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"glue\", \"sst2\")\r\nDownloading builder script: 28.8kB [00:00, 11.6MB/s] ...
2022-03-17T03:45:47
2022-03-17T07:10:15
2022-03-17T07:10:14
NONE
null
null
null
## Describe the bug Checksum error for glue sst2, stsb, rte etc datasets ## Steps to reproduce the bug ```python >>> nlp.load_dataset('glue', 'sst2') Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown sizetotal: 11.90 MiB) to Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 73.0/73.0 [00:00<00:00, 18.2kB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare self._download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare verify_checksums( File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8'] ``` ## Expected results dataset load should succeed without checksum error. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare self._download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare verify_checksums( File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8'] ``` ## Environment info - `datasets` version: '1.18.3' - Platform: Mac OS - Python version: Python 3.8.9 - PyArrow version: '7.0.0'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3952/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3951/comments
https://api.github.com/repos/huggingface/datasets/issues/3951/events
https://github.com/huggingface/datasets/issues/3951
1,171,568,814
I_kwDODunzps5F1Liu
3,951
Forked streaming datasets try to `open` data urls rather than use network
{ "login": "dlwh", "id": 9633, "node_id": "MDQ6VXNlcjk2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dlwh", "html_url": "https://github.com/dlwh", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "organizations_url": "https://api.github.com/users/dlwh/orgs", "repos_url": "https://api.github.com/users/dlwh/repos", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "received_events_url": "https://api.github.com/users/dlwh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Thanks for reporting this second issue as well. We definitely want to make streaming datasets fully working in a distributed setup and with the best performance. Right now it only supports single process.\r\n\r\nIn this issue it seems that the streaming capabilities that we offer to dataset builders are not transf...
2022-03-16T21:21:02
2022-06-10T20:47:26
2022-06-10T20:47:26
NONE
null
null
null
## Describe the bug Building on #3950, if you bypass the pickling problem you still can't use the dataset. Somehow something gets confused and the forked processes try to `open` urls rather than anything else. ## Steps to reproduce the bug ```python from multiprocessing import freeze_support import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets import torch.utils.data # work around #3950 class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset): pass def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset: return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling) if __name__ == '__main__': freeze_support() ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) ds = _ensure_format(ds) model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results I'd expect the dataset to load the url correctly and produce examples. ## Actual results ``` warnings.warn( ***** Running training ***** Num examples = 8000 Num Epochs = 9223372036854775807 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 1000 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__ data = self._next_data() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data return self._process_data(data) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data data.reraise() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise raise exception FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0. Original Traceback (most recent call last): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__ for key, example in self._iter(): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter yield from ex_iterable File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz' Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll pid, sts = os.waitpid(self.pid, flag) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler _error_if_any_worker_fails() RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15. 0%| | 0/1000 [00:02<?, ?it/s] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3951/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3950/comments
https://api.github.com/repos/huggingface/datasets/issues/3950/events
https://github.com/huggingface/datasets/issues/3950
1,171,560,585
I_kwDODunzps5F1JiJ
3,950
Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
{ "login": "dlwh", "id": 9633, "node_id": "MDQ6VXNlcjk2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dlwh", "html_url": "https://github.com/dlwh", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "organizations_url": "https://api.github.com/users/dlwh/orgs", "repos_url": "https://api.github.com/users/dlwh/repos", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "received_events_url": "https://api.github.com/users/dlwh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODk...
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too\r\n\r\nWe should definitely make `TorchIterableDataset` picklable by moving it in the main code instead of inside a function. If you'd like to contribute, feel free to open a Pull Request :)\r\n\r\nI'm also t...
2022-03-16T21:14:11
2022-06-10T20:47:26
2022-06-10T20:47:26
NONE
null
null
null
## Describe the bug Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash. ## Steps to reproduce the bug ```python import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch") model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error. ## Actual results ``` 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__ return self._get_iterator() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__ w.start() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset' 0%| | 0/1000 [00:00<?, ?it/s] ``` This immediate crash can be fixed by not using a local class to make the `TorchIterableDataset` (Note that you have to do with_format("torch") or you get an exception because the dataset has no len) However, any lambdas etc used as maps will also trigger this crash. A more permanent fix would be to move away from multiprocessing and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together) Note that if you bypass this crash you get another crash. (I'll file a separate bug). ## Environment info - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3950/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3942/comments
https://api.github.com/repos/huggingface/datasets/issues/3942/events
https://github.com/huggingface/datasets/issues/3942
1,171,177,122
I_kwDODunzps5Fzr6i
3,942
reddit_tifu dataset: Checksums didn't match for dataset source files
{ "login": "XingxingZhang", "id": 8507585, "node_id": "MDQ6VXNlcjg1MDc1ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XingxingZhang", "html_url": "https://github.com/XingxingZhang", "followers_url": "https://api.github.com/users/XingxingZhang/followers", "following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}", "gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions", "organizations_url": "https://api.github.com/users/XingxingZhang/orgs", "repos_url": "https://api.github.com/users/XingxingZhang/repos", "events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/XingxingZhang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODk...
closed
false
null
[]
null
[ "Hi @XingxingZhang, \r\n\r\nWe have already fixed this. You should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nDuplicate of:\r\n- #3773", "thank...
2022-03-16T15:23:30
2022-03-16T15:57:43
2022-03-16T15:39:25
NONE
null
null
null
## Describe the bug When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files" ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset print(datasets.__version__) # load_dataset('billsum') load_dataset('reddit_tifu', 'short') ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: mac os - Python version: Python 3.7.6 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3942/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3941/comments
https://api.github.com/repos/huggingface/datasets/issues/3941/events
https://github.com/huggingface/datasets/issues/3941
1,171,132,709
I_kwDODunzps5FzhEl
3,941
billsum dataset: Checksums didn't match for dataset source files:
{ "login": "XingxingZhang", "id": 8507585, "node_id": "MDQ6VXNlcjg1MDc1ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XingxingZhang", "html_url": "https://github.com/XingxingZhang", "followers_url": "https://api.github.com/users/XingxingZhang/followers", "following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}", "gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions", "organizations_url": "https://api.github.com/users/XingxingZhang/orgs", "repos_url": "https://api.github.com/users/XingxingZhang/repos", "events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/XingxingZhang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @XingxingZhang, thanks for reporting.\r\n\r\nThis was due to a change in Google Drive service:\r\n- #3786 \r\n\r\nWe have already fixed it:\r\n- #3787\r\n\r\nYou should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nl...
2022-03-16T14:52:08
2024-03-13T12:11:35
2022-03-16T15:46:44
NONE
null
null
null
## Describe the bug When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files" ``` File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx'] ``` ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset print(datasets.__version__) load_dataset('billsum') ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: mac os - Python version: Python 3.7.6 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3941/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3939/comments
https://api.github.com/repos/huggingface/datasets/issues/3939/events
https://github.com/huggingface/datasets/issues/3939
1,170,882,331
I_kwDODunzps5Fyj8b
3,939
Source links broken
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting @qqaatw.\r\n\r\n@mishig25 @sgugger do you think this can be tweaked in the new doc framework?\r\n- From: https://github.com/huggingface/datasets/blob/v2.0.0/\r\n- To: https://github.com/huggingface/datasets/blob/2.0.0/", "@qqaatw thanks a lot for notifying about this issue!\r\n\r\nin compari...
2022-03-16T11:17:47
2022-03-19T04:41:32
2022-03-19T04:41:32
CONTRIBUTOR
null
null
null
## Describe the bug The source links of v2.0.0 docs are broken: For exmaple, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747` here, the `v2.0.0` should be `2.0.0`. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747` ## Actual results Described above. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3939/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3937/comments
https://api.github.com/repos/huggingface/datasets/issues/3937/events
https://github.com/huggingface/datasets/issues/3937
1,170,832,006
I_kwDODunzps5FyXqG
3,937
Missing languages in lvwerra/github-code dataset
{ "login": "Eytan-S", "id": 38702500, "node_id": "MDQ6VXNlcjM4NzAyNTAw", "avatar_url": "https://avatars.githubusercontent.com/u/38702500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Eytan-S", "html_url": "https://github.com/Eytan-S", "followers_url": "https://api.github.com/users/Eytan-S/followers", "following_url": "https://api.github.com/users/Eytan-S/following{/other_user}", "gists_url": "https://api.github.com/users/Eytan-S/gists{/gist_id}", "starred_url": "https://api.github.com/users/Eytan-S/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Eytan-S/subscriptions", "organizations_url": "https://api.github.com/users/Eytan-S/orgs", "repos_url": "https://api.github.com/users/Eytan-S/repos", "events_url": "https://api.github.com/users/Eytan-S/events{/privacy}", "received_events_url": "https://api.github.com/users/Eytan-S/received_events", "type": "User", "site_admin": false }
[ { "id": 2067401494, "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion", "name": "Dataset discussion", "color": "72f99f", "default": false, "description": "Discussions on the datasets" } ]
closed
false
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[ { "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.githu...
null
[ "Thanks for contacting @Eytan-S.\r\n\r\nI think @lvwerra could better answer this. ", "That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the...
2022-03-16T10:32:03
2022-03-22T07:09:23
2022-03-21T14:50:47
NONE
null
null
null
Hi, I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset! I've noticed that two languages are missing from the dataset: TypeScript and Scala. Looks like they're also omitted from the query you used to get the original code. Are there any plans to add them in the future? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3937/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3929/comments
https://api.github.com/repos/huggingface/datasets/issues/3929/events
https://github.com/huggingface/datasets/issues/3929
1,170,066,235
I_kwDODunzps5Fvcs7
3,929
Load a local dataset twice
{ "login": "caush", "id": 28349961, "node_id": "MDQ6VXNlcjI4MzQ5OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/28349961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/caush", "html_url": "https://github.com/caush", "followers_url": "https://api.github.com/users/caush/followers", "following_url": "https://api.github.com/users/caush/following{/other_user}", "gists_url": "https://api.github.com/users/caush/gists{/gist_id}", "starred_url": "https://api.github.com/users/caush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/caush/subscriptions", "organizations_url": "https://api.github.com/users/caush/orgs", "repos_url": "https://api.github.com/users/caush/repos", "events_url": "https://api.github.com/users/caush/events{/privacy}", "received_events_url": "https://api.github.com/users/caush/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @caush, thanks for reporting:\r\n\r\nIn order to load local CSV files, you can use our \"csv\" loading script: https://huggingface.co/docs/datasets/loading#csv\r\n```python\r\ndataset = load_dataset(\"csv\", data_files=[\"data/file1.csv\", \"data/file2.csv\"])\r\n```\r\nOR:\r\n```python\r\ndataset = load_datase...
2022-03-15T18:59:26
2022-03-16T09:55:09
2022-03-16T09:54:06
NONE
null
null
null
## Describe the bug Load a local "dataset" composed of two csv files twice. ## Steps to reproduce the bug Put the two joined files in a repository named "Data". Then in python: import datasets as ds ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'}) ## Expected results Should give something like (because files have only one data row): Title, clicks Truc et astuce, 123 Machin, 12 ## Actual results Gives Title, clicks Truc et astuce, 123 Machin, 12 Truc et astuce, 123 Machin, 12 ## Environment info [file1.csv](https://github.com/huggingface/datasets/files/8256322/file1.csv) [file2.csv](https://github.com/huggingface/datasets/files/8256323/file2.csv) - `datasets` version: 2.0.0 - Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10 - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3929/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3928/comments
https://api.github.com/repos/huggingface/datasets/issues/3928/events
https://github.com/huggingface/datasets/issues/3928
1,170,017,132
I_kwDODunzps5FvQts
3,928
Frugal score deprecations
{ "login": "ierezell", "id": 30974685, "node_id": "MDQ6VXNlcjMwOTc0Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ierezell", "html_url": "https://github.com/ierezell", "followers_url": "https://api.github.com/users/ierezell/followers", "following_url": "https://api.github.com/users/ierezell/following{/other_user}", "gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}", "starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ierezell/subscriptions", "organizations_url": "https://api.github.com/users/ierezell/orgs", "repos_url": "https://api.github.com/users/ierezell/repos", "events_url": "https://api.github.com/users/ierezell/events{/privacy}", "received_events_url": "https://api.github.com/users/ierezell/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @Ierezell, thanks for reporting.\r\n\r\nI'm making a PR to suppress those logs from the terminal. " ]
2022-03-15T18:10:42
2022-03-17T08:37:24
2022-03-17T08:37:24
NONE
null
null
null
## Describe the bug The frugal score returns a really verbose output with warnings that can be easily changed. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets.load import load_metric frugal = load_metric("frugalscore") frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"]) ``` ## Expected results A clear and concise description of the expected results. ``` {'scores': [0.9946]} ``` ## Actual results Specify the actual results or traceback. ``` PyTorch: setting up devices The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-). 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 864.09ba/s] Using amp half precision backend The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. ***** Running Prediction ***** Num examples = 1 Batch size = 64 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4644.85it/s] {'scores': [0.9946]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3928/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3920/comments
https://api.github.com/repos/huggingface/datasets/issues/3920/events
https://github.com/huggingface/datasets/issues/3920
1,169,532,807
I_kwDODunzps5FtaeH
3,920
'datasets.features' is not a package
{ "login": "Arij-Aladel", "id": 68355048, "node_id": "MDQ6VXNlcjY4MzU1MDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arij-Aladel", "html_url": "https://github.com/Arij-Aladel", "followers_url": "https://api.github.com/users/Arij-Aladel/followers", "following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}", "gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions", "organizations_url": "https://api.github.com/users/Arij-Aladel/orgs", "repos_url": "https://api.github.com/users/Arij-Aladel/repos", "events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}", "received_events_url": "https://api.github.com/users/Arij-Aladel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @Arij-Aladel,\r\n\r\nYou are using a very old version of our library `datasets`: 1.8.0\r\nCurrent version is 2.0.0 (and the previous one was 1.18.4)\r\n\r\nPlease, try to update `datasets` library and check if the problem persists:\r\n```shell\r\n/env/bin/pip install -U datasets", "The problem I can no I have...
2022-03-15T11:14:23
2022-03-16T09:17:12
2022-03-16T09:17:12
NONE
null
null
null
@albertvillanova python 3.9 os: ubuntu 20.04 In conda environment torch installed by ```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html``` datasets package is installed by ``` /env/bin/pip install datasets==1.8.0 ``` During runing the code I have this error ``` [6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class [6]<stderr>: return super().find_class(mod_name, name) [6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` precisely this error appears when torch.load('data_file.pt') ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load result = unpickler.load() File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class return super().find_class(mod_name, name) ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` Why I am getting this error?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3920/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3919/comments
https://api.github.com/repos/huggingface/datasets/issues/3919/events
https://github.com/huggingface/datasets/issues/3919
1,169,497,210
I_kwDODunzps5FtRx6
3,919
AttributeError: 'DatasetDict' object has no attribute 'features'
{ "login": "jswapnil10", "id": 48145785, "node_id": "MDQ6VXNlcjQ4MTQ1Nzg1", "avatar_url": "https://avatars.githubusercontent.com/u/48145785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jswapnil10", "html_url": "https://github.com/jswapnil10", "followers_url": "https://api.github.com/users/jswapnil10/followers", "following_url": "https://api.github.com/users/jswapnil10/following{/other_user}", "gists_url": "https://api.github.com/users/jswapnil10/gists{/gist_id}", "starred_url": "https://api.github.com/users/jswapnil10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jswapnil10/subscriptions", "organizations_url": "https://api.github.com/users/jswapnil10/orgs", "repos_url": "https://api.github.com/users/jswapnil10/repos", "events_url": "https://api.github.com/users/jswapnil10/events{/privacy}", "received_events_url": "https://api.github.com/users/jswapnil10/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "You are likely trying to get the `features` from a `DatasetDict`, a dictionary containing `Datasets`. You probably first want to index into a particular split from your `DatasetDict` i.e. `dataset['train'].features`. \r\n\r\nFor example \r\n\r\n```python \r\nds = load_dataset('mnist')\r\nds.features\r\n```\r\nRetu...
2022-03-15T10:46:59
2022-03-17T04:16:14
2022-03-17T04:16:14
NONE
null
null
null
## Describe the bug Receiving the error when trying to check for Dataset features ## Steps to reproduce the bug from datasets import Dataset dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']]) dataset.features ## Expected results A clear and concise description of the expected results. ## Actual results Getting the following errror AttributeError: 'DatasetDict' object has no attribute 'features' ## Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 1.18.4 - Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9 - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3919/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3919/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3918/comments
https://api.github.com/repos/huggingface/datasets/issues/3918/events
https://github.com/huggingface/datasets/issues/3918
1,169,366,117
I_kwDODunzps5Fsxxl
3,918
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
{ "login": "willowdong", "id": 51409295, "node_id": "MDQ6VXNlcjUxNDA5Mjk1", "avatar_url": "https://avatars.githubusercontent.com/u/51409295?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willowdong", "html_url": "https://github.com/willowdong", "followers_url": "https://api.github.com/users/willowdong/followers", "following_url": "https://api.github.com/users/willowdong/following{/other_user}", "gists_url": "https://api.github.com/users/willowdong/gists{/gist_id}", "starred_url": "https://api.github.com/users/willowdong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willowdong/subscriptions", "organizations_url": "https://api.github.com/users/willowdong/orgs", "repos_url": "https://api.github.com/users/willowdong/repos", "events_url": "https://api.github.com/users/willowdong/events{/privacy}", "received_events_url": "https://api.github.com/users/willowdong/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODk...
closed
false
null
[]
null
[ "Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets.git\r\n```", "You should force redownload:\r\...
2022-03-15T08:53:45
2022-03-16T15:36:58
2022-03-15T14:01:25
NONE
null
null
null
## Describe the bug Can't load the dataset ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('multi_news') dataset_2=load_dataset("reddit_tifu", "long") ## Actual results raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF'] ## Environment info - `datasets` version: 1.18.4 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.0 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3918/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3918/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3909/comments
https://api.github.com/repos/huggingface/datasets/issues/3909/events
https://github.com/huggingface/datasets/issues/3909
1,168,578,058
I_kwDODunzps5FpxYK
3,909
Error loading file audio when downloading the Common Voice dataset directly from the Hub
{ "login": "aliceinland", "id": 30385910, "node_id": "MDQ6VXNlcjMwMzg1OTEw", "avatar_url": "https://avatars.githubusercontent.com/u/30385910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aliceinland", "html_url": "https://github.com/aliceinland", "followers_url": "https://api.github.com/users/aliceinland/followers", "following_url": "https://api.github.com/users/aliceinland/following{/other_user}", "gists_url": "https://api.github.com/users/aliceinland/gists{/gist_id}", "starred_url": "https://api.github.com/users/aliceinland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aliceinland/subscriptions", "organizations_url": "https://api.github.com/users/aliceinland/orgs", "repos_url": "https://api.github.com/users/aliceinland/repos", "events_url": "https://api.github.com/users/aliceinland/events{/privacy}", "received_events_url": "https://api.github.com/users/aliceinland/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! It could an issue with torchaudio, which version of torchaudio are you using ? Can you also try updating `datasets` to 2.0.0 and see if it works ?", "I _might_ have a similar issue. I'm trying to use the librispeech_asr dataset and read it with soundfile.\r\n\r\n```python\r\nfrom datasets import load_datase...
2022-03-14T15:53:50
2023-03-02T15:31:27
2023-03-02T15:31:26
NONE
null
null
null
## Describe the bug When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened. ## Steps to reproduce the bug ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "it", split="test") #test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'}) wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) ``` ## Expected results The common voice dataset downloaded and correctly loaded whit the use of the hugging face datasets library. ## Actual results The error is: ```python 0ex [00:00, ?ex/s] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-48-ef87f4129e6e> in <module> 7 return batch 8 ----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn) /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2107 2108 if num_proc is None or num_proc == 1: -> 2109 return self._map_single( 2110 function=function, 2111 with_indices=with_indices, /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 516 self: "Dataset" = kwargs.pop("self") 517 # apply actual function --> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 520 for dataset in datasets: /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 483 } 484 # apply actual function --> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 487 # re-apply format to the output /opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 411 # Call actual function 412 --> 413 out = func(self, *args, **kwargs) 414 415 # Update fingerprint of in-place transforms + update in-place history of transforms /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2465 if not batched: 2466 for i, example in enumerate(pbar): -> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset) 2468 if update_data: 2469 if i == 0: /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset) 2372 if with_rank: 2373 additional_args += (rank,) -> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) 2375 if update_data is None: 2376 # Check if the function returns updated examples /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs) 2067 ) 2068 # Use the LazyDict internally, while mapping the function -> 2069 result = f(decorated_item, *args, **kwargs) 2070 # Return a standard dict 2071 return result.data if isinstance(result, LazyDict) else result <ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch) 3 def speech_file_to_array_fn(batch): 4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() ----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"]) 6 batch["speech"] = resampler(speech_array).squeeze().numpy() 7 return batch /opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 150 filepath, frame_offset, num_frames, normalize, channels_first, format) 151 filepath = os.fspath(filepath) --> 152 return torch.ops.torchaudio.sox_io_load_audio_file( 153 filepath, frame_offset, num_frames, normalize, channels_first, format) 154 RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3 ``` ## Environment info - `datasets` version: 1.18.4 - Platform: Linux-5.4.0-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3909/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3906/comments
https://api.github.com/repos/huggingface/datasets/issues/3906/events
https://github.com/huggingface/datasets/issues/3906
1,168,496,328
I_kwDODunzps5FpdbI
3,906
NonMatchingChecksumError on Spider dataset
{ "login": "kolk", "id": 9049591, "node_id": "MDQ6VXNlcjkwNDk1OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kolk", "html_url": "https://github.com/kolk", "followers_url": "https://api.github.com/users/kolk/followers", "following_url": "https://api.github.com/users/kolk/following{/other_user}", "gists_url": "https://api.github.com/users/kolk/gists{/gist_id}", "starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kolk/subscriptions", "organizations_url": "https://api.github.com/users/kolk/orgs", "repos_url": "https://api.github.com/users/kolk/repos", "events_url": "https://api.github.com/users/kolk/events{/privacy}", "received_events_url": "https://api.github.com/users/kolk/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @kolk, thanks for reporting.\r\n\r\nIndeed, Google Drive service recently changed their service and we had to add a fix to our library to cope with that change:\r\n- #3787 \r\n\r\nWe just made patch release last week: 1.18.4 https://github.com/huggingface/datasets/releases/tag/1.18.4\r\n\r\nPlease, feel free to...
2022-03-14T14:54:53
2022-03-15T07:09:51
2022-03-15T07:09:51
NONE
null
null
null
## Describe the bug Failure to generate dataset ```spider``` because of checksums error for dataset source files. ## Steps to reproduce the bug ``` from datasets import load_dataset spider = load_dataset("spider") ``` ## Expected results Checksums should match for files from url ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0'] ## Actual results ``` >>> load_dataset("spider") load_dataset("spider") Downloading and preparing dataset spider/spider (download: 95.12 MiB, generated: 5.17 MiB, post-processed: Unknown size, total: 100.29 MiB) to /home/user/.cache/huggingface/datasets/spider/spider/1.0.0/79778ebea87c59b19411f1eb3eda317e9dd5f7788a556d837ef25c3ae6e5e8b7... Traceback (most recent call last): File "/home/user/py3_env/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-5-d4cb54197348>", line 1, in <module> load_dataset("spider") File "/home/user/py3_env/lib/python3.8/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 665, in _download_and_prepare verify_checksums( File "/home/user/py3_env/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0'] ``` ## Environment info datasets version: 1.18.3 Platform: Ubuntu 20 LTS Python version: 3.8.10 PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3906/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3904/comments
https://api.github.com/repos/huggingface/datasets/issues/3904/events
https://github.com/huggingface/datasets/issues/3904
1,167,730,095
I_kwDODunzps5FmiWv
3,904
CONLL2003 Dataset not available
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Thanks for reporting, @omarespejel.\r\n\r\nI'm sorry but I can't reproduce the issue: the loading of the dataset works perfecto for me and I can reach the data URL: https://data.deepai.org/conll2003.zip\r\n\r\nMight it be due to a temporary problem in the data owner site (https://data.deepai.org/) that is fixed no...
2022-03-13T23:46:15
2023-06-28T18:08:16
2022-03-17T08:21:32
NONE
null
null
null
## Describe the bug [CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip' ![image](https://user-images.githubusercontent.com/4755430/158084483-ff83631c-5154-4823-892d-577bf1166db0.png) ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset("conll2003") ``` ## Expected results Download the conll2003 dataset. ## Actual results Error: `ConnectionError: Couldn't reach https://data.deepai.org/conll2003.zip (error 502)`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3904/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3902/comments
https://api.github.com/repos/huggingface/datasets/issues/3902/events
https://github.com/huggingface/datasets/issues/3902
1,167,403,377
I_kwDODunzps5FlSlx
3,902
Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
{ "login": "arunasank", "id": 3166852, "node_id": "MDQ6VXNlcjMxNjY4NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3166852?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arunasank", "html_url": "https://github.com/arunasank", "followers_url": "https://api.github.com/users/arunasank/followers", "following_url": "https://api.github.com/users/arunasank/following{/other_user}", "gists_url": "https://api.github.com/users/arunasank/gists{/gist_id}", "starred_url": "https://api.github.com/users/arunasank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arunasank/subscriptions", "organizations_url": "https://api.github.com/users/arunasank/orgs", "repos_url": "https://api.github.com/users/arunasank/repos", "events_url": "https://api.github.com/users/arunasank/events{/privacy}", "received_events_url": "https://api.github.com/users/arunasank/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Update: `\"python3 -c \"from from datasets import Dataset, DatasetDict\"` works, but not if I import without the `python3 -c`", "Hi @arunasank, thanks for reporting.\r\n\r\nIt seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be ...
2022-03-12T21:22:03
2023-02-09T14:53:49
2022-03-22T07:10:41
NONE
null
null
null
## Describe the bug Unable to import datasets ## Steps to reproduce the bug ```python from datasets import Dataset, DatasetDict ``` ## Expected results The import works without errors ## Actual results ``` AttributeError Traceback (most recent call last) <ipython-input-37-c8cfcbe62127> in <module> 11 # from tqdm import tqdm 12 # import torch ---> 13 from datasets import Dataset 14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling 15 # from sentence_transformers import SentenceTransformer ~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module> 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module> 46 ) 47 ---> 48 import fsspec 49 import numpy as np 50 import pandas as pd ~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module> 10 from . import _version, caching 11 from .callbacks import Callback ---> 12 from .core import get_fs_token_paths, open, open_files, open_local 13 from .exceptions import FSTimeoutError 14 from .mapping import FSMap, get_mapper ~/.local/lib/python3.8/site-packages/fsspec/core.py in <module> 16 caches, 17 ) ---> 18 from .compression import compr 19 from .registry import filesystem, get_filesystem_class 20 from .utils import ( ~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module> 68 69 ---> 70 register_compression("zip", unzip, "zip") 71 register_compression("bz2", BZ2File, "bz2") 72 ~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force) 44 45 for ext in extensions: ---> 46 if ext in fsspec.utils.compressions and not force: 47 raise ValueError( 48 "Duplicate compression file extension: %s (%s)" % (ext, name) AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4 - Platform: Jupyter notebook - Python version: 3.8.10 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3902/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3901/comments
https://api.github.com/repos/huggingface/datasets/issues/3901/events
https://github.com/huggingface/datasets/issues/3901
1,167,339,773
I_kwDODunzps5FlDD9
3,901
Dataset viewer issue for IndicParaphrase- the preview doesn't show
{ "login": "ratishsp", "id": 3006607, "node_id": "MDQ6VXNlcjMwMDY2MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3006607?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ratishsp", "html_url": "https://github.com/ratishsp", "followers_url": "https://api.github.com/users/ratishsp/followers", "following_url": "https://api.github.com/users/ratishsp/following{/other_user}", "gists_url": "https://api.github.com/users/ratishsp/gists{/gist_id}", "starred_url": "https://api.github.com/users/ratishsp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ratishsp/subscriptions", "organizations_url": "https://api.github.com/users/ratishsp/orgs", "repos_url": "https://api.github.com/users/ratishsp/repos", "events_url": "https://api.github.com/users/ratishsp/events{/privacy}", "received_events_url": "https://api.github.com/users/ratishsp/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "It seems to have been fixed:\r\n\r\n<img width=\"1534\" alt=\"Capture d’écran 2022-04-12 à 14 10 07\" src=\"https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png\">\r\n" ]
2022-03-12T16:56:05
2022-04-12T12:10:50
2022-04-12T12:10:49
NONE
null
null
null
## Dataset viewer issue for '*IndicParaphrase*' **Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)* *The preview of the dataset doesn't come up. The error on the console is: Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/hi_IndicParaphrase_v1.0.tar'* Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3901/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3896/comments
https://api.github.com/repos/huggingface/datasets/issues/3896/events
https://github.com/huggingface/datasets/issues/3896
1,166,628,270
I_kwDODunzps5FiVWu
3,896
Missing google file for `multi_news` dataset
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "reported by @abidlabs ", "related to https://github.com/huggingface/datasets/pull/3843?", "`datasets` 1.18.4 fixes the issue when you load the dataset with `load_dataset`.\r\n\r\nWhen loading in streaming mode, the fix is indeed on https://github.com/huggingface/datasets/pull/3843 which will be merged soon :)"...
2022-03-11T16:38:10
2022-03-15T12:30:23
2022-03-15T12:30:23
CONTRIBUTOR
null
null
null
## Dataset viewer issue for '*multi_news*' **Link:** https://huggingface.co/datasets/multi_news ``` Server error Status code: 400 Exception: FileNotFoundError Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src ``` Am I the one who added this dataset ? No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3896/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3896/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3889/comments
https://api.github.com/repos/huggingface/datasets/issues/3889/events
https://github.com/huggingface/datasets/issues/3889
1,165,456,083
I_kwDODunzps5Fd3LT
3,889
Cannot load beans dataset (Couldn't reach the dataset)
{ "login": "ivsanro1", "id": 30293331, "node_id": "MDQ6VXNlcjMwMjkzMzMx", "avatar_url": "https://avatars.githubusercontent.com/u/30293331?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ivsanro1", "html_url": "https://github.com/ivsanro1", "followers_url": "https://api.github.com/users/ivsanro1/followers", "following_url": "https://api.github.com/users/ivsanro1/following{/other_user}", "gists_url": "https://api.github.com/users/ivsanro1/gists{/gist_id}", "starred_url": "https://api.github.com/users/ivsanro1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ivsanro1/subscriptions", "organizations_url": "https://api.github.com/users/ivsanro1/orgs", "repos_url": "https://api.github.com/users/ivsanro1/repos", "events_url": "https://api.github.com/users/ivsanro1/events{/privacy}", "received_events_url": "https://api.github.com/users/ivsanro1/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Hi ! A pull request is open to fix the dataset, we'll release a patch soon with a new release of `datasets` :)" ]
2022-03-10T16:34:08
2022-03-15T15:26:47
2022-03-15T15:26:47
NONE
null
null
null
## Describe the bug The beans dataset is unavailable to download. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('beans') ``` ## Expected results The dataset would be downloaded with no issue. ## Actual results ``` ConnectionError: Couldn't reach https://storage.googleapis.com/ibeans/train.zip (error 403) ``` [It looks like the billing of this project has been disabled because it is associated with a delinquent account.](https://storage.googleapis.com/ibeans/train.zip ) ## Environment info Google Colab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3889/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3888/comments
https://api.github.com/repos/huggingface/datasets/issues/3888/events
https://github.com/huggingface/datasets/issues/3888
1,165,435,529
I_kwDODunzps5FdyKJ
3,888
IterableDataset columns and feature types
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" }, { "id": 3287...
open
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "htt...
null
[ "#self-assign", "@alvarobartt I've assigned you the issue since I'm not actively working on it.", "Cool thanks @mariosasko I'll try to fix it in the upcoming days, thanks!", "@lhoestq so in order to address what’s not completed in this issue, do you think it makes sense to add a param `features` to `IterableD...
2022-03-10T16:19:12
2022-11-29T11:39:24
null
MEMBER
null
null
null
Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None` However it's often interesting to know the column types and types. This helps knowing what's inside your dataset without having to manually check a few examples, and this is useful to prepare a processing pipeline or to train models. Here are a few cases that lead to `features` being `None`: 1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset 2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map` 3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map` Things we can consider, for each point above: 1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features` 2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API) 2.b prefetch the first output value to infer the type 3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution may be worth it. Maybe prefetching could also be done when explicitly asked by the user cc @mariosasko @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3888/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/3883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3883/comments
https://api.github.com/repos/huggingface/datasets/issues/3883/events
https://github.com/huggingface/datasets/issues/3883
1,164,663,229
I_kwDODunzps5Fa1m9
3,883
The metric Meteor doesn't work for nltk ==3.6.4
{ "login": "zhaowei-wang-nlp", "id": 22047467, "node_id": "MDQ6VXNlcjIyMDQ3NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhaowei-wang-nlp", "html_url": "https://github.com/zhaowei-wang-nlp", "followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers", "following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions", "organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs", "repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos", "events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @zhaowei-wang98, thanks for reporting.\r\n\r\nWe are fixing it... " ]
2022-03-10T02:28:27
2022-03-10T09:03:39
2022-03-10T09:03:39
NONE
null
null
null
## Describe the bug Using the metric Meteor with nltk == 3.6.4 gives a TypeError: TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object ## Steps to reproduce the bug ```python import datasets metric = datasets.load_metric("meteor") predictions = ["hello world"] references = ["hello world"] metric.compute(predictions=predictions, references=references) ``` ## Expected results TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object I think this TypeError exists because input sentences are tokenized into lists of tokens and the str.lower() is applied to this list of tokens. ## Actual results No error but a meteor score ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: linux - Python version: 3.8.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3883/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3881/comments
https://api.github.com/repos/huggingface/datasets/issues/3881/events
https://github.com/huggingface/datasets/issues/3881
1,164,452,005
I_kwDODunzps5FaCCl
3,881
How to use Image folder
{ "login": "INF800", "id": 45640029, "node_id": "MDQ6VXNlcjQ1NjQwMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/INF800", "html_url": "https://github.com/INF800", "followers_url": "https://api.github.com/users/INF800/followers", "following_url": "https://api.github.com/users/INF800/following{/other_user}", "gists_url": "https://api.github.com/users/INF800/gists{/gist_id}", "starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/INF800/subscriptions", "organizations_url": "https://api.github.com/users/INF800/orgs", "repos_url": "https://api.github.com/users/INF800/repos", "events_url": "https://api.github.com/users/INF800/events{/privacy}", "received_events_url": "https://api.github.com/users/INF800/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Even this from docs throw same error\r\n```\r\ndataset = load_dataset(\"imagefolder\", data_files=\"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\", split=\"train\")\r\n\r\n```", "Hi @INF800,\r\n\r\nPlease note that the `imagefolder` feature enhanc...
2022-03-09T21:18:52
2022-03-11T08:45:52
2022-03-11T08:45:52
NONE
null
null
null
Ran this code ``` load_dataset("imagefolder", data_dir="./my-dataset") ``` `https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /tmp/ipykernel_33/1648737256.py in <module> ----> 1 load_dataset("imagefolder", data_dir="./my-dataset") /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1684 revision=revision, 1685 use_auth_token=use_auth_token, -> 1686 **config_kwargs, 1687 ) 1688 /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs) 1511 download_config.use_auth_token = use_auth_token 1512 dataset_module = dataset_module_factory( -> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files 1514 ) 1515 /opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs) 1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" -> 1202 ) from None 1203 raise e1 from None 1204 else: FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3881/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3877/comments
https://api.github.com/repos/huggingface/datasets/issues/3877/events
https://github.com/huggingface/datasets/issues/3877
1,164,146,311
I_kwDODunzps5FY3aH
3,877
Align metadata to DCAT/DCAT-AP
{ "login": "EmidioStani", "id": 278367, "node_id": "MDQ6VXNlcjI3ODM2Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/278367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EmidioStani", "html_url": "https://github.com/EmidioStani", "followers_url": "https://api.github.com/users/EmidioStani/followers", "following_url": "https://api.github.com/users/EmidioStani/following{/other_user}", "gists_url": "https://api.github.com/users/EmidioStani/gists{/gist_id}", "starred_url": "https://api.github.com/users/EmidioStani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EmidioStani/subscriptions", "organizations_url": "https://api.github.com/users/EmidioStani/orgs", "repos_url": "https://api.github.com/users/EmidioStani/repos", "events_url": "https://api.github.com/users/EmidioStani/events{/privacy}", "received_events_url": "https://api.github.com/users/EmidioStani/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2022-03-09T16:12:25
2022-03-09T16:33:42
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Align to DCAT metadata to describe datasets **Describe the solution you'd like** Reuse terms and structure from DCAT in the metadata file, ideally generate a json-ld file dcat compliant **Describe alternatives you've considered** **Additional context** DCAT is a W3C standard extended in Europe with DCAT-AP, an example is data.europa.eu publishing datasets metadata in DCAT-AP
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3877/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/3872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3872/comments
https://api.github.com/repos/huggingface/datasets/issues/3872/events
https://github.com/huggingface/datasets/issues/3872
1,163,853,026
I_kwDODunzps5FXvzi
3,872
HTTP error 504 Server Error: Gateway Time-out
{ "login": "illiyas-sha", "id": 83509215, "node_id": "MDQ6VXNlcjgzNTA5MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/83509215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/illiyas-sha", "html_url": "https://github.com/illiyas-sha", "followers_url": "https://api.github.com/users/illiyas-sha/followers", "following_url": "https://api.github.com/users/illiyas-sha/following{/other_user}", "gists_url": "https://api.github.com/users/illiyas-sha/gists{/gist_id}", "starred_url": "https://api.github.com/users/illiyas-sha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/illiyas-sha/subscriptions", "organizations_url": "https://api.github.com/users/illiyas-sha/orgs", "repos_url": "https://api.github.com/users/illiyas-sha/repos", "events_url": "https://api.github.com/users/illiyas-sha/events{/privacy}", "received_events_url": "https://api.github.com/users/illiyas-sha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "is pushing directly with git (and git-lfs) an option for you?", "I have installed git-lfs and doing this push with that\r\n", "yes but is there any way you could try pushing with `git` command line directly instead of `push_to_hub`?", "Okay. I didnt saved the dataset to my local machine. So, I processed the ...
2022-03-09T12:03:37
2022-03-15T16:19:50
2022-03-15T16:19:50
NONE
null
null
null
I am trying to push a large dataset(450000+) records with the help of `push_to_hub()` While pushing, it gives some error like this. ``` Traceback (most recent call last): File "data_split_speech.py", line 159, in <module> data_new_2.push_to_hub("user-name/dataset-name",private=True) File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub( File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub api.upload_file( File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file raise err File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file r.raise_for_status() File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet ``` Can anyone help me to resolve this issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3872/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3869/comments
https://api.github.com/repos/huggingface/datasets/issues/3869/events
https://github.com/huggingface/datasets/issues/3869
1,163,434,800
I_kwDODunzps5FWJsw
3,869
Making the Hub the place for datasets in Portuguese
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Hi @omarespejel! I think the philosophy for `datasets` issues is to create concrete issues with proposals to add a specific, individual dataset rather than umbrella issues for things such as datasets for a language, since we could end up with hundreds of issues (one per language). I see NILC - USP has many dataset...
2022-03-09T03:06:18
2022-03-09T09:04:09
null
NONE
null
null
null
Let's make Hugging Face Datasets the central hub for datasets in Portuguese :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese speaking community. What are some datasets in Portuguese worth integrating into the Hugging Face hub? Special thanks to @augusnunes for his collaboration on identifying the first ones: - [NILC - USP](http://www.nilc.icmc.usp.br/nilc/index.php/tools-and-resources). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). cc @osanseviero
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3869/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/3861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3861/comments
https://api.github.com/repos/huggingface/datasets/issues/3861/events
https://github.com/huggingface/datasets/issues/3861
1,162,702,044
I_kwDODunzps5FTWzc
3,861
big_patent cased version
{ "login": "slvcsl", "id": 25265140, "node_id": "MDQ6VXNlcjI1MjY1MTQw", "avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slvcsl", "html_url": "https://github.com/slvcsl", "followers_url": "https://api.github.com/users/slvcsl/followers", "following_url": "https://api.github.com/users/slvcsl/following{/other_user}", "gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}", "starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions", "organizations_url": "https://api.github.com/users/slvcsl/orgs", "repos_url": "https://api.github.com/users/slvcsl/repos", "events_url": "https://api.github.com/users/slvcsl/events{/privacy}", "received_events_url": "https://api.github.com/users/slvcsl/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "To follow up on this: the cased and uncased versions actually contain different content, and the cased one is easier since it contains a Summary of the Invention in the input.\r\n\r\nSee the paper describing the issue here:\r\nhttps://aclanthology.org/2022.gem-1.34/", "Thanks for proposing the addition of the ca...
2022-03-08T14:08:55
2023-04-21T14:32:03
2023-04-21T14:32:03
NONE
null
null
null
Hi! I am interested in working with the big_patent dataset. In Tensorflow, there are a number of versions of the dataset: - 1.0.0 : lower cased tokenized words - 2.0.0 : Update to use cased raw strings - 2.1.2 (default): Fix update to cased raw strings. The version in the huggingface `datasets` library is the 1.0.0. I would be very interested in using the 2.1.2 cased version (used more, recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_datasets`). Is there a way to already load it, or would it be possible to add that version?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3861/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3859/comments
https://api.github.com/repos/huggingface/datasets/issues/3859/events
https://github.com/huggingface/datasets/issues/3859
1,162,559,333
I_kwDODunzps5FSz9l
3,859
Unable to dowload big_patent (FileNotFoundError)
{ "login": "slvcsl", "id": 25265140, "node_id": "MDQ6VXNlcjI1MjY1MTQw", "avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slvcsl", "html_url": "https://github.com/slvcsl", "followers_url": "https://api.github.com/users/slvcsl/followers", "following_url": "https://api.github.com/users/slvcsl/following{/other_user}", "gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}", "starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions", "organizations_url": "https://api.github.com/users/slvcsl/orgs", "repos_url": "https://api.github.com/users/slvcsl/repos", "events_url": "https://api.github.com/users/slvcsl/events{/privacy}", "received_events_url": "https://api.github.com/users/slvcsl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODk...
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @slvcsl, thanks for reporting.\r\n\r\nYesterday we just made a patch release of our `datasets` library that fixes this issue: version 1.18.4.\r\nhttps://pypi.org/project/datasets/#history\r\n\r\nPlease, feel free to update `datasets` library to the latest version: \r\n```shell\r\npip install -U datasets\r\n```\...
2022-03-08T11:47:12
2022-03-08T13:04:09
2022-03-08T13:04:04
NONE
null
null
null
## Describe the bug I am trying to download some splits of the big_patent dataset, using the following code: `ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload") ` However, this leads to a FileNotFoundError. FileNotFoundError Traceback (most recent call last) [<ipython-input-3-8d8a745706a9>](https://localhost:8080/#) in <module>() 1 from datasets import load_dataset ----> 2 ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload") 8 frames [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1705 ignore_verifications=ignore_verifications, 1706 try_from_hf_gcs=try_from_hf_gcs, -> 1707 use_auth_token=use_auth_token, 1708 ) 1709 [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 593 if not downloaded_from_gcs: 594 self._download_and_prepare( --> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 596 ) 597 # Sync info [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 659 split_dict = SplitDict(dataset_name=self.name) 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 662 663 # Checksums verification [/root/.cache/huggingface/modules/datasets_modules/datasets/big_patent/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c/big_patent.py](https://localhost:8080/#) in _split_generators(self, dl_manager) 123 split_types = ["train", "val", "test"] 124 extract_paths = dl_manager.extract( --> 125 {k: os.path.join(dl_path, "bigPatentData", k + ".tar.gz") for k in split_types} 126 ) 127 extract_paths = {k: os.path.join(extract_paths[k], k) for k in split_types} [/usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py](https://localhost:8080/#) in extract(self, path_or_paths, num_proc) 282 download_config.extract_compressed_file = True 283 extracted_paths = map_nested( --> 284 partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False 285 ) 286 path_or_paths = NestedDataStructure(path_or_paths) [/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm) 260 mapped = [ 261 _single_map_nested((function, obj, types, None, True)) --> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm) 263 ] 264 else: [/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <listcomp>(.0) 260 mapped = [ 261 _single_map_nested((function, obj, types, None, True)) --> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm) 263 ] 264 else: [/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _single_map_nested(args) 194 # Singleton first to spare some computation 195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 196 return function(data_struct) 197 198 # Reduce logging to keep things readable in multiprocessing with tqdm [/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in cached_path(url_or_filename, download_config, **download_kwargs) 314 elif is_local_path(url_or_filename): 315 # File, but it doesn't exist. --> 316 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist") 317 else: 318 # Something unknown FileNotFoundError: Local file /root/.cache/huggingface/datasets/downloads/extracted/ad068abb3e11f9f2f5440b62e37eb2b03ee515df9de1637c55cd1793b68668b2/bigPatentData/train.tar.gz doesn't exist I have tried this in a number of machines, including on Colab, so I think this is not environment dependent. How do I load the bigPatent dataset?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3859/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3857/comments
https://api.github.com/repos/huggingface/datasets/issues/3857/events
https://github.com/huggingface/datasets/issues/3857
1,162,525,353
I_kwDODunzps5FSrqp
3,857
Order of dataset changes due to glob.glob.
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.\r\n\r\nNote that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()`...
2022-03-08T11:10:30
2022-03-14T11:08:22
null
CONTRIBUTOR
null
null
null
## Describe the bug After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the OS system. There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)` even the streaming download manager (if I'm not mistaken): https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3857/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/3855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3855/comments
https://api.github.com/repos/huggingface/datasets/issues/3855/events
https://github.com/huggingface/datasets/issues/3855
1,162,448,589
I_kwDODunzps5FSY7N
3,855
Bad error message when loading private dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "We raise the error “ FileNotFoundError: can’t find the dataset” mainly to follow best practice in security (otherwise users could be able to guess what private repositories users/orgs may have)\r\n\r\nWe can indeed reformulate this and add the \"If this is a private repository,...\" part !", "Resolved via https:...
2022-03-08T09:55:17
2022-07-11T15:06:40
2022-07-11T15:06:40
CONTRIBUTOR
null
null
null
## Describe the bug A pretty common behavior of an interaction between the Hub and datasets is the following. An organization adds a dataset in private mode and wants to load it afterward. ```python from transformers import load_dataset ds = load_dataset("NewT5/dummy_data", "dummy") ``` This command then fails with: ```bash FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub ``` **even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org. We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO. ## Steps to reproduce the bug E.g. execute the following code to see the different error messages between `transformes` and `datasets`. 1. Transformers ```python from transformers import BertModel BertModel.from_pretrained("NewT5/dummy_model") ``` The error message is clearer here - it gives: ``` OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`. ``` Let's maybe do the same for datasets? The PR was introduced to `transformers` here: https://github.com/huggingface/transformers/pull/15261 ## Expected results Better error message ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4.dev0 - Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3855/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/3854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3854/comments
https://api.github.com/repos/huggingface/datasets/issues/3854/events
https://github.com/huggingface/datasets/issues/3854
1,162,434,199
I_kwDODunzps5FSVaX
3,854
load only England English dataset from common voice english dataset
{ "login": "amanjaiswal777", "id": 36677001, "node_id": "MDQ6VXNlcjM2Njc3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/36677001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amanjaiswal777", "html_url": "https://github.com/amanjaiswal777", "followers_url": "https://api.github.com/users/amanjaiswal777/followers", "following_url": "https://api.github.com/users/amanjaiswal777/following{/other_user}", "gists_url": "https://api.github.com/users/amanjaiswal777/gists{/gist_id}", "starred_url": "https://api.github.com/users/amanjaiswal777/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amanjaiswal777/subscriptions", "organizations_url": "https://api.github.com/users/amanjaiswal777/orgs", "repos_url": "https://api.github.com/users/amanjaiswal777/repos", "events_url": "https://api.github.com/users/amanjaiswal777/events{/privacy}", "received_events_url": "https://api.github.com/users/amanjaiswal777/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @amanjaiswal777,\r\n\r\nFirst note that the dataset you are trying to load is deprecated: it was the Common Voice dataset release as of Dec 2020.\r\n\r\nCurrently, Common Voice dataset releases are directly hosted on the Hub, under the Mozilla Foundation organization: https://huggingface.co/mozilla-foundation\r...
2022-03-08T09:40:52
2022-03-09T08:13:33
2022-03-09T08:13:33
NONE
null
null
null
training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]') testing_data = load_dataset("common_voice", "en", split="test[:200]") I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this? **Typical Voice Accent Proportions:** - 24% United States English - 8% England English - 5% India and South Asia (India, Pakistan, Sri Lanka) - 3% Australian English - 3% Canadian English - 2% Scottish English - 1% Irish English - 1% Southern African (South Africa, Zimbabwe, Namibia) - 1% New Zealand English Can we replicate this for Age as well? **Age proportions of the common voice:-** - 24% 19 - 29 - 14% 30 - 39 - 10% 40 - 49 - 6% < 19 - 4% 50 - 59 - 4% 60 - 69 - 1% 70 – 79
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3854/timeline
null
completed