Most developers know by now that JavaScript Object Notation (JSON) is the best way to work with data sets in JavaScript. Yesterday, I blogged about “When To Use Pagination in REST Resources” and advocated for better UX design instead of pagination. This generated another question: “Just how big is TOO BIG for JSON?”
In the context of a desktop browser, many factors feed into the answer: file size, memory and object size, and transport and loading time. I wanted to find the “sweet spot” for the maximum usable size of a JSON object, so I decided to test desktop browsers and see how they handle JSON at different sizes. At some point, I would also like to see how this translates to mobile devices.
The JSON samples were pulled from customer data in sizes ranging from 1 record to 1,000,000 records. Each record averages around 200 bytes apiece. The core HTML file simply loads the JSON file and assigns it to the testData variable.
The following is an example JSON object used in this test:
```javascript
var testData = [
  {
    "ACCTOUNT_NUMBER": "1234567890",
    "CUSTOMER_NAME": "ACME Products and Services, Inc.",
    "ADDRESS": "123 Main Street",
    "CITY": "Albuquerque",
    "STATE": "NM",
    "ZIP": "87101-1234"
  }
]
```
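The core HTML page itself is trivial. Here is a minimal sketch of the harness, assuming the sample above is saved as a script file named testData.js (the file name and the logging are illustrative, not the exact test file):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>JSON size test</title>
  </head>
  <body>
    <!-- The data file assigns the records to the global testData variable -->
    <script src="testData.js"></script>
    <script>
      // Report how many records arrived once the DOM is ready
      document.addEventListener("DOMContentLoaded", function () {
        console.log("Records loaded:", testData.length);
      });
    </script>
  </body>
</html>
```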
Five (5) samples per browser were taken to create the averages. Originally ten (10) samples per browser were taken, but it quickly became obvious that the data wasn’t changing enough to justify the extra effort. The results were gathered using the operating system’s process monitor and each browser’s developer tools. Since network traffic can be deceiving, I chose to load the JSON file from the local file system and provide estimated download times from Numion’s Download Time Calculator at http://www.numion.com/calculators/time.html. Memory usage was measured after the object loaded, while the browser was at rest, and each sample was taken from a fresh process. Memory was read from each browser’s “tab” or renderer process, with the exception of Firefox, where it was read from the single core process. These tests were run on my 2011 2.2GHz i7 MacBook Pro with 8GB of RAM.
During the test, I spent some time seeing how usable each browser was while loading the object. I wanted to see whether it was possible to load large objects while keeping the browser usable, and how the browser performed after the object was loaded. Most of the “During Load” results are marked untestable because I am not quick enough to tell whether a browser is responsive within 200ms.
The following are the results from the tests:
Records | Download Time* | Chrome (OS X) | Firefox (OS X) | Safari (OS X) | IE9 (Win 7) |
---|---|---|---|---|---|
1,000,000 (153.37MB) | 19:51 | DOM Ready: 16.4s RAM: 1.09GB Loading: browser hang Loaded: usable, 1 tab crash | DOM Ready: 14.29s RAM: 1.82GB Loading: untestable Loaded: usable | DOM Ready: 5.13s RAM: 1.76GB Loading: browser hang Loaded: usable | Data failed to load and browser became unresponsive. |
750,000 (115.13MB) | 14:54 | DOM Ready: 12.24s RAM: 727.3MB Loading: browser hang Loaded: usable | DOM Ready: 13.8s RAM: 1.48GB Loading: browser hang Loaded: usable | DOM Ready: 3.82s RAM: 1.39GB Loading: barely usable Loaded: usable | Data failed to load and browser became unresponsive. |
500,000 (76.69MB) | 9:56 | DOM Ready: 9.13s RAM: 512.0MB Loading: browser hang Loaded: usable | DOM Ready: 12.19s RAM: 1.14GB Loading: browser hang Loaded: usable | DOM Ready: 2.49s RAM: 1.02GB Loading: usable Loaded: usable | Data failed to load and browser became unresponsive. |
250,000 (38.44MB) | 4:58 | DOM Ready: 2.98s RAM: 289.8MB Loading: barely usable Loaded: usable | DOM Ready: 3.09s RAM: 659.3MB Loading: barely usable Loaded: usable | DOM Ready: 1.57s RAM: 550.0MB Loading: usable Loaded: usable | DOM Ready: 4.88s RAM: 537.1MB Loading: browser hang Loaded: usable |
100,000 (15.5MB) | 2:00 | DOM Ready: 1.24s RAM: 150.4MB Loading: usable Loaded: usable | DOM Ready: 1.62s RAM: 424.0MB Loading: usable Loaded: usable | DOM Ready: 463ms RAM: 296.2MB Loading: untestable Loaded: usable | DOM Ready: 1.71s RAM: 210.6MB Loading: untestable Loaded: usable |
50,000 (7.77MB) | 1:00 | DOM Ready: 521ms RAM: 89.7MB Loading: untestable Loaded: usable | DOM Ready: 1.08s RAM: 308.9MB Loading: untestable Loaded: usable | DOM Ready: 243ms RAM: 192.8MB Loading: untestable Loaded: usable | DOM Ready: 801ms RAM: 110.1MB Loading: untestable Loaded: usable |
25,000 (3.87MB) | 00:30 | DOM Ready: 255ms RAM: 67.5MB Loading: untestable Loaded: usable | DOM Ready: 646ms RAM: 269.3MB Loading: untestable Loaded: usable | DOM Ready: 169ms RAM: 130.8MB Loading: untestable Loaded: usable | DOM Ready: 408ms RAM: 51.1MB Loading: untestable Loaded: usable |
10,000 (1.55MB) | 0:12 | DOM Ready: 144ms RAM: 53.1MB Loading: untestable Loaded: usable | DOM Ready: 193ms RAM: 234.7MB Loading: untestable Loaded: usable | DOM Ready: 66ms RAM: 105.1MB Loading: untestable Loaded: usable | DOM Ready: 117ms RAM: 35.1MB Loading: untestable Loaded: usable |
5,000 (796.57KB) | 0:06 | DOM Ready: 81ms RAM: 39.14MB Loading: untestable Loaded: usable | DOM Ready: 123ms RAM: 220.9MB Loading: untestable Loaded: usable | DOM Ready: 42ms RAM: 77.2MB Loading: untestable Loaded: usable | DOM Ready: 151ms RAM: 18.2MB Loading: untestable Loaded: usable |
1,000 (159.59KB) | 0:01 | DOM Ready: 68ms RAM: 29.1MB Loading: untestable Loaded: usable | DOM Ready: 76ms RAM: 209.2MB Loading: untestable Loaded: usable | DOM Ready: 41ms RAM: 72.0MB Loading: untestable Loaded: usable | DOM Ready: 63ms RAM: 11.3MB Loading: untestable Loaded: usable |
500 (80.01KB) | 0:00 | DOM Ready: 61ms RAM: 32.5MB Loading: untestable Loaded: usable | DOM Ready: 59ms RAM: 209.2MB Loading: untestable Loaded: usable | DOM Ready: 32ms RAM: 71.6MB Loading: untestable Loaded: usable | DOM Ready: 54ms RAM: 8.6MB Loading: untestable Loaded: usable |
250 (40.05KB) | 0:00 | DOM Ready: 55ms RAM: 28.7MB Loading: untestable Loaded: usable | DOM Ready: 53ms RAM: 209.1MB Loading: untestable Loaded: usable | DOM Ready: 33ms RAM: 71.4MB Loading: untestable Loaded: usable | DOM Ready: 62ms RAM: 7.1MB Loading: untestable Loaded: usable |
100 (15.97KB) | 0:00 | DOM Ready: 53ms RAM: 28.8MB Loading: untestable Loaded: usable | DOM Ready: 66ms RAM: 209.1MB Loading: untestable Loaded: usable | DOM Ready: 33ms RAM: 70.1MB Loading: untestable Loaded: usable | DOM Ready: 39ms RAM: 6.6MB Loading: untestable Loaded: usable |
50 (8.05KB) | 0:00 | DOM Ready: 40ms RAM: 27.9MB Loading: untestable Loaded: usable | DOM Ready: 68ms RAM: 209.1MB Loading: untestable Loaded: usable | DOM Ready: 36ms RAM: 71.1MB Loading: untestable Loaded: usable | DOM Ready: 35ms RAM: 5.7MB Loading: untestable Loaded: usable |
25 (4.07KB) | 0:00 | DOM Ready: 12ms RAM: 28.6MB Loading: untestable Loaded: usable | DOM Ready: 77ms RAM: 209.1MB Loading: untestable Loaded: usable | DOM Ready: 29ms RAM: 71.0MB Loading: untestable Loaded: usable | DOM Ready: 29ms RAM: 3.5MB Loading: untestable Loaded: usable |
1 (192B) | 0:00 | DOM Ready: 7ms RAM: 28.2MB Loading: untestable Loaded: usable | DOM Ready: 59ms RAM: 207MB Loading: untestable Loaded: usable | DOM Ready: 35ms RAM: 64.0MB Loading: untestable Loaded: usable | DOM Ready: 19ms RAM: 3.1MB Loading: untestable Loaded: usable |
* Download time is estimated using Numion’s Download Time Calculator, based on a 1.544Mbps connection, in mm:ss format.
As you can see from my results, the browsers were all surprisingly close to one another. The most interesting part for me was that Internet Explorer failed to load the object after the 250,000 record test. I couldn’t find anything pointing to a size limitation here. If anyone knows why IE failed to load the 500,000 record object, let me know.
From this test, I consider the sweet spot to be around 10,000 records (1.55MB). The maximum number of usable records I would push to a browser would be around 25,000 (3.87MB). Keep in mind there are numerous factors that determine how many records you should return to your JavaScript application; the purpose of this test was simply to establish a general maximum for conversations around large record sets in JSON.
Thanks! That was useful info.
In my tests, IE9 failed with JSON responses above 40-45MB.
It might have something to do with the MaxJsonLength property.
Why is the browser using 50MB of RAM to load a 15MB file?
100,000 (15.5MB) | 2:00 | Chrome: DOM Ready 1.24s, RAM 150.4MB, Loading usable, Loaded usable | Firefox: DOM Ready 1.62s, RAM 424.0MB, Loading usable, Loaded usable | Safari: DOM Ready 463ms, RAM 296.2MB, Loading untestable, Loaded usable | IE9: DOM Ready 1.71s, RAM 210.6MB, Loading untestable, Loaded usable
This difference includes general application overhead. That memory is not specific to loading the 15MB file; it reflects all of the browser’s processes.
The JSON file (which is essentially just text) is 15MB, but the browser has to use JavaScript to parse it all into data structures and then run queries against it. Depending on how the JavaScript is written, a lot of variables and arrays can be generated behind the scenes and sit in RAM (e.g., JSON data often uses arrays for sub-sets of data, like a person record with an array of addresses). These arrays can get huge if they’re created first and then used to write the HTML, rather than just writing the HTML line by line directly.
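As a rough illustration of that point, here is a sketch (the field names follow the sample record above; jsonText and the results element are assumed to already exist on the page):

```javascript
// jsonText holds the raw 15MB of JSON text; parsing it materializes
// every record as a live JavaScript object, which costs memory on top
// of the text itself.
var records = JSON.parse(jsonText);

// Building one string and writing it to the DOM once keeps the
// intermediate memory relatively small...
var rows = records.map(function (r) {
  return "<tr><td>" + r.CUSTOMER_NAME + "</td><td>" + r.CITY + "</td></tr>";
});
document.getElementById("results").innerHTML = rows.join("");

// ...whereas copying the data into extra lookup arrays or per-record
// objects before rendering multiplies what the browser has to hold in RAM.
```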
It was really a good article, the one I have been looking for. The thing is, we don’t need to send such a huge object to a browser unless the user actually requested it.
The most applicable use case for sending large JSON objects to the browser is blog posts (dynamically loaded into the DOM), which doesn’t necessarily mean we have to pull 100 long posts at once.
You can easily destroy the object after manipulating the DOM. This way the browser’s memory becomes free for the next batch.
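For example, a rough sketch of that batch-and-release idea (renderBatch and fetchNextBatch are hypothetical placeholders for your own rendering and loading code):

```javascript
// responseText holds the JSON for the current batch of posts only.
var posts = JSON.parse(responseText);
renderBatch(posts); // write this batch of posts into the DOM

// Drop the reference so the parsed objects can be garbage collected
// before the next batch is requested.
posts = null;
fetchNextBatch();
```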
Thanks
Thank you for the article. I was wondering if there are any limits on the size of the text in JSON value pairs? For example, if we have a JSON object for "Games" and one of the values is "Game rules," which contains long text, is there a limit on the size of that text?
Example:
"game": {
  "game_name": "Some Game",
  "publisher": "Some publisher",
  "game_rules": "very long text here....",
  "game_status": "in development"
}
JavaScript has no formal standard maximum length for a String value; the maximum is determined by each individual implementation. I have read that IE’s maximum String size is 2^31 and that Mozilla supports closer to 2^15. But the real question is how long it will take to parse that large amount of data, and whether it is worth sticking with a data-driven JSON approach. It seems to me that this data might be best returned to the client application in pages or consumable chunks instead. It might even make sense to return this as HTML or HTML fragments, depending on the implementation.
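A sketch of that paged approach, assuming a hypothetical endpoint that accepts page and size parameters (the URL and the response shape are made up for illustration):

```javascript
// Hypothetical endpoint: /api/games/123/rules?page=1&size=50
function loadRulesPage(page, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/games/123/rules?page=" + page + "&size=50");
  xhr.onload = function () {
    // Each response carries only a manageable slice of the long text.
    onDone(JSON.parse(xhr.responseText));
  };
  xhr.send();
}

// Usage: render the first chunk immediately, request more on demand.
loadRulesPage(1, function (chunk) {
  document.getElementById("rules").textContent = chunk.text;
});
```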
If I have this single JSON object:
"test": {
  "variable": "a very long long string"
}
How much overhead do you think there is in serialization/deserialization/download over HTTP if I keep increasing that long string? I’m trying to see if there is a performance issue if I put a super big string as a value.
There is no formal string length maximum in JavaScript, so this is left to each individual implementation. However, in theory, this can be very large, upwards of hundreds of megabytes. For example, Chrome can support up to 512MB for an allocation request. Likely, most browsers will run out of memory before being able to allocate this much to a single member. In a more practical view, “super big” and “very long long” are relative, and it is impossible to determine whether it will cause an issue in your case. How large are the strings you are planning to work with?
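One quick way to get a feel for the overhead in your own case is to time the round trip on a string of the size you expect. A rough sketch (the 10MB figure is just a placeholder):

```javascript
// Build a payload with one very long string value (~10 million characters).
var payload = { variable: new Array(10 * 1024 * 1024 + 1).join("x") };

console.time("stringify");
var serialized = JSON.stringify(payload);
console.timeEnd("stringify");

console.time("parse");
var roundTripped = JSON.parse(serialized);
console.timeEnd("parse");

console.log("serialized length:", serialized.length);
```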
Thanks!
Have you tried doing the same test recently?
And in the new Microsoft browser Edge/Spartan?
No, unfortunately I haven’t had a chance to re-run this test. Good call, though; after I finish up the project I’m on, I will try to find some time to do that.
Thanks for the information, very useful. You say 10,000 records is your sweet spot. However, looking at the download time of 12 seconds, I am wondering how you can say that’s a good sweet spot. Doesn’t that seem unreasonably long to download data? Or maybe I am missing something? Thanks
For many of the business web applications I work on, this 12-second “sweet spot” is very acceptable. I wouldn’t recommend this “sweet spot” for a public web site or shopping experience and would expect it to be much lower there.
I planned to try big JSON data (100MB) in plain JavaScript, and most of my website visitors will be using a dial-up connection. What method would help it work faster? Async? Please reply. Thank you.
I’m not sure that much of anything will help users retrieve a 100MB file over dial-up. In theory, asynchronous communication could technically slow down the response in this scenario if other async resources were loading. If my customers were using dial-up or other slow data connections, I would also assume that their devices don’t have a lot of resources or modern support. I would stick with classic server-side rendering and work to make the data as small as possible.
Thank you for your quick response. So I think I should leave this idea behind and go with server-side.
Thanks, very useful post 🙂
Thanks for this information. I have also tested this on Node.js; I thought it was a limitation of V8 only, but after reading your blog it seems that all browsers have this limitation. So I assume that every browser’s JavaScript engine, including V8 on Node.js, cannot handle an overly large JSON object.
Test: https://git.io/vrYmr
Josh, have you had a chance to re-run that test in IE11?
Thanks, I was wondering if I can use it for 1000 records or not!
I have a 200 GB JSON file. Is there any possibility it can be processed from an application?
I don’t know anything about the application you are speaking of. Assuming it is a web app and you wanted to process the 200 GB JSON file client-side, there are a variety of reasons this may not be a good practice. Even if the browser could handle this amount of data, which I highly doubt even modern browsers could, the first challenge would be the time it would take for the user to download that file. They certainly wouldn’t be processing it in real time. Another challenge would be data allowance concerns; it is becoming increasingly common for users to be on limited data connections. I would suggest running this data through a database or some other tool and returning only the necessary fragments of data to the user. Again, I have no context for your application here, but in general this would not be advisable.
Interesting article. Do you think a transformation like this might make it better in general?
So, pass the same data as a few large arrays of primitives instead of one large array of objects:
var testData =
{
  "ACCTOUNT_NUMBERS": ["1234567890", …],
  "CUSTOMER_NAMES": ["ACME Products and Services, Inc.", …],
  "ADDRESSES": ["123 Main Street", …],
  "CITIES": ["Albuquerque", …],
  "STATES": ["NM", …],
  "ZIPS": ["87101-1234", …]
}
That is a great thought and could certainly reduce the overall size of the object. It would be interesting to see the performance of that model. The challenging part would be keeping all of the arrays in the same order so that the individual records stay intact.
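For instance, here is a sketch of reassembling a single record from that column-oriented structure (using the field names from the comment above); it only works if every array keeps its entries in exactly the same order:

```javascript
// Rebuild the i-th record from the parallel arrays.
function recordAt(data, i) {
  return {
    ACCTOUNT_NUMBER: data.ACCTOUNT_NUMBERS[i],
    CUSTOMER_NAME: data.CUSTOMER_NAMES[i],
    ADDRESS: data.ADDRESSES[i],
    CITY: data.CITIES[i],
    STATE: data.STATES[i],
    ZIP: data.ZIPS[i]
  };
}

var first = recordAt(testData, 0); // { ACCTOUNT_NUMBER: "1234567890", ... }
```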
Also worth pointing out, just in terms of transfer time and size with a data structure like that: if your application is already using GZIP compression from the server, you wouldn’t get much, if any, additional size reduction of the payload.
Hi,
We maintain a long JSON file containing all of the dataset iteration logic, and it gets checked into source control by multiple developers.
Maintaining and validating this big JSON file has become time consuming.
We opted to put all of the Angular and HTML transformations into the JSON file so that, at runtime, the file can be parsed seamlessly to render the UI with the given filters.
Can you offer any thoughts on splitting the JSON file into multiple files and storing them that way, or is there a better way to maintain a big JSON file?
A large JSON file can be nearly impossible to manage manually as a flat file. I would suggest storing the data in another storage and management tool that exposes an API to serve the data or to export a static data file; this should simplify maintenance. I would also recommend finding ways to make the data smaller for your clients to download if possible. If users don’t need all of the data you are sending, an API that can return only the necessary data would be best.
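If splitting the flat file is the interim step, a rough sketch along these lines could produce smaller per-chunk files (Node.js; it assumes the file is a single top-level array that still fits in memory, and the file names and chunk size are illustrative):

```javascript
// split.js - break one large JSON array into numbered chunk files.
var fs = require("fs");

var records = JSON.parse(fs.readFileSync("big-data.json", "utf8"));
var chunkSize = 10000;

for (var i = 0; i < records.length; i += chunkSize) {
  var chunk = records.slice(i, i + chunkSize);
  fs.writeFileSync("data-part-" + (i / chunkSize) + ".json", JSON.stringify(chunk));
}
```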
Hello, thank you for these measurements. Could you please tell me if you know whether the same measurements exist for Android applications? I haven’t found any information online about the largest JSON object they can receive and the time they need to process it.
I haven’t done this evaluation on Android. I do plan on doing this again soon and plan to include mobile browsers and platforms as well.
Hi Josh,
I came across this excellent article while researching an issue one of my colleagues is having with a 50K+ record set. You mention in your last comment (2018) that you were planning to do this again, presumably with modern browsers (Chrome, Firefox, Edge, Safari). Are you still planning to do that? I would very much be interested to see that data.
Thanks again. Very informative.
Great post! It was the #1 search result on Google for “correct way to structure json from large dataset.” I was wondering about the most efficient way to build an object from 1,000+ records taken from an HTML table in Quickbase. I created a Chrome Extension that builds a single object from a paginated dataset with 200 records per page. Because of the CORS configuration, I’m sending the object in a POST request to a proxy server where I built a NodeJS app that uses the Quickbase API to create a new table in a different Quickbase realm. I was debating sending a JSON object from each page of Quickbase to the Node app, but I would rather pass the data through once instead of storing it on the proxy. I feel much more confident with the single-object option now after reading your blog. Thanks again.
Great Article! Thank you
Great and useful job, thanks.
To parse large JSON files, I use http://json-csv-xls.com. This is a bit off topic, but sometimes it helps.
Thank you very much for this article.
It really helped me decide whether to do client-side or server-side pagination.
Thank you very much for your time and effort.
Thank you for article.
I was worried that my JSON (600k) was too big for the browser.
You helped me calm down.
It would be really interesting to see how this goes on phones. On the other hand, modern phones can be more powerful than laptops.
Great post really informative.