MongoDB Limits and Thresholds

This document provides a collection of hard and soft limitations of the MongoDB system.

BSON Document Size

The maximum BSON document size is 16 megabytes.

The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See mongofiles and the documentation for your driver for more information about GridFS.
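
A quick way to check whether a document is approaching the cap is Object.bsonsize() in mongosh; a minimal sketch, using a hypothetical collection name:

doc = db.mycollection.findOne()   // "mycollection" is a placeholder
print( Object.bsonsize(doc) )     // size in bytes; must stay below 16777216 (16 MB)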

Nested Depth for BSON Documents

MongoDB supports no more than 100 levels of nesting for BSON documents.

Database Name Case Sensitivity

Database names are case-sensitive in MongoDB. They also have an additional restriction: case cannot be the only difference between database names.

If the database salesDB already exists, MongoDB will return an error if you attempt to create a database named salesdb.

                                              
mixedCase = db.getSiblingDB('salesDB')
lowerCase = db.getSiblingDB('salesdb')
mixedCase.retail.insertOne( { "widgets": 1, "price": 50 } )

The operation succeeds and insertOne() implicitly creates the salesDB database.

                                              
lowerCase.retail.insertOne( { "widgets": 1, "price": 50 } )

The operation fails. insertOne() tries to create a salesdb database and is blocked by the naming restriction. Database names must differ on more than just case.
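
The original page runs a find against the lower-case name next; a minimal reconstruction of that example:

lowerCase.retail.find( { "widgets": 1 } )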

This operation does not return any results because the database names are case sensitive. There is no error because find() doesn't implicitly create a new database.

Restrictions on Database Names for Windows

For MongoDB deployments running on Windows, database names cannot contain any of the following characters:

/\. "$*<>:|?

Also database names cannot contain the null character.

Restrictions on Database Names for Unix and Linux Systems

For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters:

/\. "$

Also database names cannot contain the null character.

Length of Database Names

Database names cannot be empty and must have fewer than 64 characters.

Restriction on Collection Names

Collection names should begin with an underscore or a letter character, and cannot:

  • contain the $.
  • be an empty string (e.g. "").
  • contain the null character.
  • begin with the system. prefix. (Reserved for internal use.)

If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the db.getCollection() method in mongosh or a similar method for your driver.
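
For example, a collection whose name begins with a number cannot be reached with dot notation, but db.getCollection() works (the name here is hypothetical):

db.getCollection("2022_archive").findOne()   // db.2022_archive.findOne() would be a syntax error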

Namespace Length

  • For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit on collection/view namespace to 255 bytes. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).
  • For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of the collection/view namespace remains 120 bytes.
Restrictions on Field Names
  • Field names cannot contain the null character.
  • The server permits storage of field names that contain dots (.) and dollar signs ($).
  • MongoDB 5.0 adds improved support for the use of ($) and (.) in field names. There are some restrictions. See Field Name Considerations for more details.
Restrictions on _id

The field name _id is reserved for use as a primary key; its value must be unique in the collection, is immutable, and may be of any type other than an array. If the _id contains subfields, the subfield names cannot begin with a ($) symbol.
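
A minimal sketch of the immutability rule, against a hypothetical test collection:

db.test.insertOne( { _id: 1, status: "ok" } )
db.test.updateOne( { _id: 1 }, { $set: { _id: 2 } } )   // fails: _id is immutable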

Use caution: the issues discussed in this section could lead to data loss or corruption.

The MongoDB Query Language is undefined over documents with duplicate field names. BSON builders may support creating a BSON document with duplicate field names. While the BSON builder may not throw an error, inserting these documents into MongoDB is not supported even if the insert succeeds. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion.

Starting in MongoDB 5.0, document field names can be dollar ($) prefixed and can contain periods (.). However, mongoimport and mongoexport may not work as expected in some situations with field names that make use of these characters.

MongoDB Extended JSON v2 cannot differentiate between type wrappers and fields that happen to have the same name as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include dollar ($) prefixed keys. The DBRef mechanism is an exception to this general rule.

There are also restrictions on using mongoimport and mongoexport with periods (.) in field names. Since CSV files use the period (.) to represent data hierarchies, a period (.) in a field name will be misinterpreted as a level of nesting.

There is a small chance of data loss when using dollar ($) prefixed field names or field names that contain periods (.) if these field names are used in conjunction with unacknowledged writes (write concern w=0) on servers that are older than MongoDB 5.0.

When running insert, update, and findAndModify commands, drivers that are 5.0 compatible remove restrictions on using documents with field names that are dollar ($) prefixed or that contain periods (.). These field names generated a client-side error in earlier driver versions.

The restrictions are removed regardless of the server version the driver is connected to. If a 5.0 driver sends a document to an older server, the document will be rejected without sending an error.

Index Key Limit

For MongoDB 2.6 through MongoDB versions with fCV set to "4.0" or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.

When the Index Key Limit applies:

  • MongoDB will not create an index on a collection if the index entry for an existing document exceeds the index key limit.
  • Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the compact command as well as the db.collection.reIndex() method. Because these operations drop all the indexes from a collection and then recreate them sequentially, the error from the index key limit prevents these operations from rebuilding any remaining indexes for the collection.
  • MongoDB will not insert into an indexed collection any document with an indexed field whose corresponding index entry would exceed the index key limit, and instead, will return an error. Previous versions of MongoDB would insert but not index such documents.
  • Updates to the indexed field will error if the updated value causes the index entry to exceed the index key limit. If an existing document contains an indexed field whose index entry exceeds the limit, any update that results in the relocation of that document on disk will error.
  • mongorestore and mongoimport will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.
  • In MongoDB 2.6, secondary members of replica sets will continue to replicate documents with an indexed field whose corresponding index entry exceeds the index key limit on initial sync but will print warnings in the logs. Secondary members also allow index build and rebuild operations on a collection that contains an indexed field whose corresponding index entry exceeds the index key limit but with warnings in the logs. With mixed version replica sets where the secondaries are version 2.6 and the primary is version 2.4, secondaries will replicate documents inserted or updated on the 2.4 primary, but will print error messages in the log if the documents contain an indexed field whose corresponding index entry exceeds the index key limit.
  • For existing sharded collections, chunk migration will fail if the chunk has a document that contains an indexed field whose index entry exceeds the index key limit.
Number of Indexes per Collection

A single collection can have no more than 64 indexes.

Index Name Length

Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Name Length limit for MongoDB versions with featureCompatibilityVersion (fCV) set to "4.2" or greater.

In previous versions of MongoDB or MongoDB versions with fCV set to "4.0" or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. <database name>.<collection name>.$<index name>), cannot be longer than 127 bytes.

By default, <index name> is the concatenation of the field names and index type. You can explicitly specify the <index name> to the createIndex() method to ensure that the fully qualified index name does not exceed the limit.
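
For example, a sketch with hypothetical field names that pins a short explicit name instead of the default concatenation:

db.products.createIndex(
    { category: 1, "details.manufacturer.name": 1 },
    { name: "cat_mfr_idx" }   // explicit short name keeps the fully qualified name under the limit
)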

Number of Indexed Fields in a Compound Index

There can be no more than 32 fields in a compound index.

Queries cannot use both text and Geospatial Indexes

You cannot combine the $text query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the $text query with the $near operator.

Fields with 2dsphere Indexes can only hold Geometries

Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a 2dsphere indexed field, or build a 2dsphere index on a collection where the indexed field has non-geometry data, the operation will fail.

See also:

The unique indexes limit in Sharding Operational Restrictions.

NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double

If the value of a field returned from a query that is covered by an index is NaN, the type of that NaN value is always double.

Multikey Index

Multikey indexes cannot cover queries over array field(s).

Geospatial Index

Geospatial indexes cannot cover a query.

Memory Usage in Index Builds

createIndexes supports building one or more indexes on a collection. createIndexes uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for createIndexes is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single createIndexes command. Once the memory limit is reached, createIndexes uses temporary disk files in a subdirectory named _tmp within the --dbpath directory to complete the build.

You can override the memory limit by setting the maxIndexBuildMemoryUsageMegabytes server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.
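
A sketch of raising the parameter at runtime (the 500 is purely illustrative):

db.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 500 } )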

Changed in version 4.2.

  • For feature compatibility version (fcv) "4.2", the index build memory limit applies to all index builds.
  • For feature compatibility version (fcv) "4.0", the index build memory limit only applies to foreground index builds.

Index builds may be initiated either by a user command such as Create Index or by an administrative process such as an initial sync. Both are subject to the limit set by maxIndexBuildMemoryUsageMegabytes.

An initial sync operation populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set in maxIndexBuildMemoryUsageMegabytes.

To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described in Rolling Index Builds on Replica Sets.

Collation and Index Types

The following index types only support simple binary comparison and do not support collation:

  • text indexes,
  • 2d indexes, and
  • geoHaystack indexes.

To create a text, a 2d, or a geoHaystack index on a collection that has a non-simple collation, you must explicitly specify { collation: { locale: "simple" } } when creating the index.
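
For instance, assuming a hypothetical reviews collection that was created with a non-simple collation:

db.createCollection( "reviews", { collation: { locale: "fr" } } )
db.reviews.createIndex( { comments: "text" }, { collation: { locale: "simple" } } )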

Hidden Indexes
  • You cannot hide the _id index.
  • You cannot use hint() on a hidden index.
Maximum Number of Documents in a Capped Collection

If you specify a maximum number of documents for a capped collection using the max parameter to create, the limit must be less than 2^32 documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
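
A minimal sketch creating a capped collection with an explicit document cap (the name and sizes are illustrative):

db.createCollection( "eventlog", { capped: true, size: 10485760, max: 5000 } )   // max must be < 2^32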

Number of Members of a Replica Set

Replica sets can have up to 50 members.

Number of Voting Members of a Replica Set

Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.

Maximum Size of Auto-Created Oplog

If you do not explicitly specify an oplog size (i.e. with oplogSizeMB or --oplogSize), MongoDB will create an oplog that is no larger than 50 gigabytes. [1]

[1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.
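
If the default is not appropriate, the oplog can be resized at runtime with the replSetResizeOplog command (size is in megabytes; the value here is illustrative):

db.adminCommand( { replSetResizeOplog: 1, size: 16000 } )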

Sharded clusters have the restrictions and thresholds described here.

Operations Unavailable in Sharded Environments

$where does not permit references to the db object from the $where function. This is uncommon in un-sharded collections.

The geoSearch command is not supported in sharded environments.

Covered Queries in Sharded Clusters

Starting in MongoDB 3.0, an index cannot cover a query on a sharded collection when run against a mongos if the index does not contain the shard key, with the following exception for the _id index: If a query on a sharded collection only specifies a condition on the _id field and returns only the _id field, the _id index can cover the query when run against a mongos even if the _id field is not the shard key.

In previous versions, an index cannot cover a query on a sharded collection when run against a mongos.

Sharding Existing Collection Data Size

An existing collection can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values, and the configured chunk size.

These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.

Use the following formulas to calculate the theoretical maximum collection size.

                                              
maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
maxCollectionSize (MB) = maxSplits * (chunkSize / 2)

The maximum BSON document size is 16 MB or 16777216 bytes.

All conversions should use base-2 scale, e.g. 1024 kilobytes = 1 megabyte.
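
A worked instance of the formulas, assuming an average shard key value size of 512 bytes and the default 64 MB chunk size (this reproduces the first column of the table below):

maxSplits = 16777216 / 512                   // 32768 splits
maxCollectionSizeMB = maxSplits * (64 / 2)   // 1048576 MB = 1 TB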

If maxCollectionSize is less than or nearly equal to the target collection, increase the chunk size to ensure successful initial sharding. If there is doubt as to whether the result of the calculation is too 'close' to the target collection size, it is likely better to increase the chunk size.

After successful initial sharding, you can reduce the chunk size as needed. If you later reduce the chunk size, it may take time for all chunks to split to the new size. See Modify Chunk Size in a Sharded Cluster for instructions on modifying chunk size.

This table illustrates the approximate maximum collection sizes using the formulas described above:

Average Size of Shard Key Values           512 bytes   256 bytes   128 bytes   64 bytes
Maximum Number of Splits                   32,768      65,536      131,072     262,144
Max Collection Size (64 MB Chunk Size)     1 TB        2 TB        4 TB        8 TB
Max Collection Size (128 MB Chunk Size)    2 TB        4 TB        8 TB        16 TB
Max Collection Size (256 MB Chunk Size)    4 TB        8 TB        16 TB       32 TB

Single Document Modification Operations in Sharded Collections

All update and remove() operations for a sharded collection that specify the justOne or multi: false option must include the shard key or the _id field in the query specification.

update and remove() operations specifying justOne or multi: false in a sharded collection which do not contain either the shard key or the _id field return an error.
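
A sketch against a hypothetical orders collection sharded on { region: 1 }:

db.orders.deleteOne( { region: "EU", orderId: 42 } )   // ok: filter includes the shard key
db.orders.deleteOne( { orderId: 42 } )                 // errors: neither shard key nor _id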

Unique Indexes in Sharded Collections

MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.

Maximum Number of Documents Per Chunk to Migrate

By default, MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection.

For chunks that are too large to migrate, starting in MongoDB 4.4:

  • A new balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Chunks that Exceed Size Limit for details.
  • The moveChunk command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo (see the sketch below).
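
A sketch of the moveChunk option, with hypothetical namespace, shard key value, and shard name:

db.adminCommand( {
    moveChunk: "sales.orders",
    find: { region: "EU" },
    to: "shard0001",
    forceJumbo: true   // allow migrating a chunk that exceeds the size limit
} )
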
Shard Key Size

Starting in version 4.4, MongoDB removes the limit on the shard key size.

For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.

Shard Key Index Type

A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.

A shard key index cannot be an index that specifies a multikey index, a text index, or a geospatial index on the shard key fields.

Shard Key Selection is Immutable in MongoDB 4.2 and Earlier

Your options for changing a shard key depend on the version of MongoDB that you are running:

  • Starting in MongoDB 5.0, you can reshard a collection by changing its shard key.
  • Starting in MongoDB 4.4, you can refine a shard key by adding a suffix field or fields to the existing shard key.
  • In MongoDB 4.2 and earlier, the choice of shard key cannot be changed after sharding.

In MongoDB 4.2 and earlier, to change a shard key:

  • Dump all data from MongoDB into an external format.
  • Drop the original sharded collection.
  • Configure sharding using the new shard key.
  • Pre-split the shard key range to ensure initial even distribution.
  • Restore the dumped data into MongoDB.
Monotonically Increasing Shard Keys Can Limit Insert Throughput

For clusters with high insert volumes, a shard key with monotonically increasing and decreasing keys can affect insert throughput. If your shard key is the _id field, be aware that the default values of the _id fields are ObjectIds which have mostly increasing values.

When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.

If the operations on the cluster are predominantly read operations and updates, this limitation may not affect the cluster.

To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically.

Hashed shard keys and hashed indexes store hashes of keys with ascending values.
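
For example, a minimal sketch sharding a hypothetical collection on a hashed _id (assumes sharding is already enabled for the database):

sh.shardCollection( "app.events", { _id: "hashed" } )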

Sort Operations

If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. The name refers to the requirement that the SORT stage reads all input documents before returning any output documents, blocking the flow of data for that specific query.

If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (New in MongoDB 4.4). allowDiskUse() allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation.

Changed in version 4.4: For MongoDB 4.2 and prior, blocking sort operations could not exceed 32 megabytes of system memory.
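
A sketch of opting in to disk use for a large blocking sort (collection and field names are hypothetical):

db.sensorData.find().sort( { timestamp: 1 } ).allowDiskUse()   // mongosh, MongoDB 4.4+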

For more information on sorts and index use, see Sort and Index Use.

Aggregation Pipeline Performance

Each individual pipeline stage has a limit of 100 megabytes of RAM. By default, if a stage exceeds this limit, MongoDB produces an error. For some pipeline stages you can allow pipeline processing to take up more space by using the allowDiskUse option to enable aggregation pipeline stages to write data to temporary files.

The $search aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.

Examples of stages that can spill to disk when allowDiskUse is true are:

  • $bucket
  • $bucketAuto
  • $group
  • $sort when the sort operation is not supported by an index
  • $sortByCount

Pipeline stages operate on streams of documents, with each pipeline stage taking in documents, processing them, and then outputting the resulting documents.

Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.

If the results of one of your $sort pipeline stages exceed the limit, consider adding a $limit stage.
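
A sketch combining both mitigations, with hypothetical names:

db.sensorData.aggregate(
    [ { $sort: { timestamp: 1 } }, { $limit: 100 } ],
    { allowDiskUse: true }   // lets eligible stages spill to temporary files
)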

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

Aggregation and Read Concern
  • Starting in MongoDB 4.2, the $out stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $out stage in the pipeline.
  • The $merge stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $merge stage in the pipeline.
2d Geospatial queries cannot use the $or operator
Geospatial Queries

For spherical queries, use the 2dsphere index result.

The use of a 2d index for spherical queries may lead to incorrect results, such as the use of the 2d index for spherical queries that wrap around the poles.

Geospatial Coordinates
  • Valid longitude values are between -180 and 180, both inclusive.
  • Valid latitude values are between -90 and 90, both inclusive.
Area of GeoJSON Polygons

For $geoIntersects or $geoWithin, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the $geometry expression; otherwise, $geoIntersects or $geoWithin queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, $geoIntersects or $geoWithin queries for the complementary geometry.

Multi-document Transactions

For multi-document transactions:

  • You can specify read/write (CRUD) operations on existing collections. For a list of CRUD operations, see CRUD Operations.
  • Starting in MongoDB 4.4, you can create collections and indexes in transactions. For details, see Create Collections and Indexes In a Transaction.
  • The collections used in a transaction can be in different databases.

    You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.

  • You cannot write to capped collections. (Starting in MongoDB 4.2)
  • You cannot use read concern "snapshot" when reading from a capped collection. (Starting in MongoDB 5.0)
  • You cannot read/write to collections in the config, admin, or local databases.
  • You cannot write to system.* collections.
  • You cannot return the supported operation's query plan (i.e. explain).
  • For cursors created outside of a transaction, you cannot call getMore inside the transaction.
  • For cursors created in a transaction, you cannot call getMore outside the transaction.
  • Starting in MongoDB 4.2, you cannot specify killCursors as the first operation in a transaction.

Changed in version 4.4.

The following operations are not allowed in transactions:

  • Operations that affect the database catalog, such as creating or dropping a collection or an index when using MongoDB 4.2 or lower. Starting in MongoDB 4.4, you can create collections and indexes in transactions unless the transaction is a cross-shard write transaction. For details, see Create Collections and Indexes In a Transaction.
  • Creating new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.
  • Explicit creation of collections, e.g. the db.createCollection() method, and indexes, e.g. the db.collection.createIndexes() and db.collection.createIndex() methods, when using a read concern level other than "local".
  • The listCollections and listIndexes commands and their helper methods.
  • Other non-CRUD and non-informational operations, such as createUser, getParameter, count, etc. and their helpers.

Transactions have a lifetime limit as specified by transactionLifetimeLimitSeconds. The default is 60 seconds.
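
A minimal mongosh transaction sketch that stays within these restrictions (database, collection, and field names are hypothetical; the collection must already exist on servers older than 4.4):

session = db.getMongo().startSession()
coll = session.getDatabase("hr").getCollection("employees")
session.startTransaction( { readConcern: { level: "snapshot" }, writeConcern: { w: "majority" } } )
try {
    coll.updateOne( { _id: 1 }, { $set: { status: "active" } } )
    session.commitTransaction()   // must finish within transactionLifetimeLimitSeconds
} catch (e) {
    session.abortTransaction()
    throw e
}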

Write Command Batch Limit Size

100,000 writes are allowed in a single batch operation, defined by a single request to the server.

Changed in version 3.6: The limit raises from 1,000 to 100,000 writes. This limit also applies to legacy OP_INSERT messages.

The Bulk() operations in mongosh and comparable methods in the drivers do not have this limit.
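
For example, a small batch expressed with db.collection.bulkWrite() (names are hypothetical):

db.items.bulkWrite( [
    { insertOne: { document: { sku: "a", qty: 1 } } },
    { updateOne: { filter: { sku: "b" }, update: { $set: { qty: 2 } }, upsert: true } }
] )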

Views

The view definition pipeline cannot include the $out or the $merge stage. If the view definition includes a nested pipeline (e.g. the view definition includes a $lookup or $facet stage), this restriction applies to the nested pipelines as well.
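
A minimal sketch of a legal view definition (names are hypothetical); adding $out or $merge to this pipeline would be rejected:

db.createView( "managerView", "employees", [ { $match: { role: "manager" } } ] )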

Views have the following operation restrictions:

  • Views are read-only.
  • You cannot rename views.
  • find() operations on views do not support the following projection operators:
    • $
    • $elemMatch
    • $slice
    • $meta
  • Views do not support text search.
  • Views do not support map-reduce operations.
  • Views do not support geoNear operations (i.e. the $geoNear pipeline stage).
Projection Restrictions

New in version 4.4:

$-Prefixed Field Path Restriction
Starting in MongoDB 4.4, the find() and findAndModify() projection cannot project a field that starts with $ with the exception of the DBRef fields. For example, starting in MongoDB 4.4, the following operation is invalid:
                                                      
db.inventory.find( { }, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } ) // Invalid starting in 4.4
In earlier versions, MongoDB ignores the $-prefixed field projections.
$ Positional Operator Placement Restriction
Starting in MongoDB 4.4, the $ projection operator can only appear at the end of the field path; e.g. "field.$" or "fieldA.fieldB.$". For example, starting in MongoDB 4.4, the following operation is invalid:
                                                      
db .inventory .find ( { } , { "instock.$.qty": ane } ) // Invalid starting in 4.4
To resolve, remove the component of the field path that follows the $ projection operator. In previous versions, MongoDB ignores the part of the path that follows the $; i.e. the projection is treated as "instock.$".
Empty Field Name Projection Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a projection of an empty field name. For example, starting in MongoDB 4.4, the following operation is invalid:
                                                      
db .inventory .find ( { } , { "": 0 } ) // Invalid starting in iv.four
In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields.
Path Collision: Embedded Documents and Its Fields
Starting in MongoDB 4.4, it is illegal to project an embedded document with any of the embedded document's fields. For example, consider a collection inventory with documents that contain a size field:
                                                      
{ ... , size: { h: 10, w: 15.25, uom: "cm" } , ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both the size document and the size.uom field:
                                                      
db.inventory.find( { }, { size: 1, "size.uom": 1 } ) // Invalid starting in 4.4
In previous versions, the latter projection between the embedded document and its fields determines the projection:
  • If the projection of the embedded document comes after any and all projections of its fields, MongoDB projects the embedded document. For example, the projection document { "size.uom": 1, size: 1 } produces the same result as the projection document { size: 1 }.
  • If the projection of the embedded document comes before the projection of any of its fields, MongoDB projects the specified field or fields. For example, the projection document { "size.uom": 1, size: 1, "size.h": 1 } produces the same result as the projection document { "size.uom": 1, "size.h": 1 }.
Path Collision: $slice of an Array and Embedded Fields
Starting in MongoDB 4.4, find() and findAndModify() projection cannot contain both a $slice of an array and a field embedded in the array. For example, consider a collection inventory that contains an array field instock:
                                                      
{ ... , instock: [ { warehouse: "A", qty: 35 } , { warehouse: "B", qty: 15 } , { warehouse: "C", qty: 35 } ] , ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error:
                                                      
db .inventory .find ( { } , { "instock": { $piece: 1 } , "instock.warehouse": 0 } ) // Invalid starting in 4.4
In previous versions, the projection applies both projections and returns the first element ($slice: 1) in the instock array but suppresses the warehouse field in the projected element. Starting in MongoDB 4.4, to achieve the same result, use the db.collection.aggregate() method with two separate $project stages.
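One way to express that with two $project stages, sketched against the same inventory example:
db.inventory.aggregate( [
    { $project: { instock: { $slice: [ "$instock", 1 ] } } },   // first element of the array
    { $project: { "instock.warehouse": 0 } }                    // then suppress the embedded field
] )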
$ Positional Operator and $slice Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include the $slice projection expression as part of a $ projection expression. For example, starting in MongoDB 4.4, the following operation is invalid:
                                                      
db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } ) // Invalid starting in 4.4
In previous versions, MongoDB returns the first element (instock.$) in the instock array that matches the query condition; i.e. the positional projection "instock.$" takes precedence and the $slice:1 is a no-op. The "instock.$": { $slice: 1 } does not exclude any other document field.
Sessions and $external Username Limit

To use Client Sessions and Causal Consistency Guarantees with $external authentication users (Kerberos, LDAP, or x.509 users), the usernames cannot be greater than 10k bytes.

Session Idle Timeout

Sessions that receive no read or write operations for 30 minutes or that are not refreshed using refreshSessions within this threshold are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.

Consider an application that issues a db.collection.find(). The server returns a cursor along with a batch of documents defined by the cursor.batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error as the cursor was killed when the session was closed.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation inside an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. For example:

                                          
var session = db.getMongo().startSession()
var sessionId = session.getSessionId().id
var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date() // take note of time at operation start

while (cursor.hasNext()) {
    // Check if more than 5 minutes have passed since the last refresh
    if ( (new Date() - refreshTimestamp) / 1000 > 300 ) {
        print("refreshing session")
        db.adminCommand({"refreshSessions" : [sessionId]})
        refreshTimestamp = new Date()
    }
    // process cursor normally
}

In the example operation, the db.collection.find() method is associated with an explicit session. The cursor is configured with noCursorTimeout() to prevent the server from closing the cursor if idle. The while loop includes a block that uses refreshSessions to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.

For MongoDB drivers, refer to the driver documentation for instructions and syntax for creating sessions.


Source: https://docs.mongodb.com/manual/reference/limits/
