Database Administrators
Asked by Narendra on January 18, 2021
I’m new to Morphia. I’m using Morphia and mongo-java-driver.jar to talk to a replica set (and eventually a cluster) from a Java program.
I wrote a sample program; the code is below:
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

import com.mongodb.MongoClient;
import com.mongodb.MongoOptions;
import com.mongodb.ReadPreference;
import com.mongodb.ReplicaSetStatus;
import com.mongodb.ServerAddress;
import com.mongodb.WriteConcern;

public static void createDBConnection() {
    try {
        // Seed list: the driver discovers the rest of the replica set from these.
        List<ServerAddress> addrs = new ArrayList<ServerAddress>();
        addrs.add(new ServerAddress("192.168.1.80", 27017));
        addrs.add(new ServerAddress("192.168.1.81", 27017));
        addrs.add(new ServerAddress("192.168.1.82", 27017));
        MorphiaObject.mongo = new MongoClient(addrs);
        ReplicaSetStatus status = MorphiaObject.mongo.getReplicaSetStatus();
        List<String> dbs = MorphiaObject.mongo.getDatabaseNames();
        MongoOptions mongoOptions = MorphiaObject.mongo.getMongoOptions();
        MorphiaObject.mongo.setWriteConcern(WriteConcern.REPLICAS_SAFE);
        MorphiaObject.mongo.setReadPreference(ReadPreference.secondaryPreferred());
        System.out.println("Read preference: " + MorphiaObject.mongo.getReadPreference());
    } catch (UnknownHostException e) {
        e.printStackTrace();
    }
}
If I want to write data safely to all nodes, what steps do I need to take?
What are the minimum configurations I need to set for a replica set?
What validations do I need to perform, e.g. whether the replica set is down or alive and whether it is readable, and how can the application recover if it can’t find the primary?
Thanks in advance.
"It is the best way to ensure data integrity to have MongoDB write to all nodes."
This is entirely wrong. Let's assume you have three members in a replica set. Well, you want the data written to all nodes, so you set your write concern to 3. Now, one of your nodes fails. And while your replica set would work perfectly well for reads and writes, with a write concern of 3 your writes will start to fail. Now you are losing data, since you cannot write any more until all members of your replica set are up, running and resynced.
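To see why, compare the acknowledgement counts involved. This is a minimal standalone sketch; `majority` is an illustrative helper, not a driver API:

```java
public class WriteConcernMath {
    // Illustrative helper: smallest number of members forming a majority.
    static int majority(int members) {
        return members / 2 + 1;
    }

    public static void main(String[] args) {
        int members = 3;
        int down = 1;
        int reachable = members - down;

        // w = 3 demands acknowledgement from every member:
        boolean wAllSucceeds = reachable >= members;                // false once one node is down
        // w = majority only needs 2 of 3:
        boolean wMajoritySucceeds = reachable >= majority(members); // still true

        System.out.println("w=all succeeds:      " + wAllSucceeds);
        System.out.println("w=majority succeeds: " + wMajoritySucceeds);
    }
}
```

A majority-based write concern survives exactly the single-member failures that make w = "all members" fail, which is why the driver ships `WriteConcern.MAJORITY`.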
The write concern REPLICAS_SAFE
is usually enough. However, if you have more than 3 data-bearing nodes, absolutely need (relatively) fast consistency across all nodes, and do not care about write performance, you might want to create a custom write concern this way:
private static final int NODES = 5;
private static final int LOOONG_TIMEOUT = 5000;

// Integer division already floors, so a majority of NODES is simply:
int paranoidReplicas = NODES / 2 + 1;
// w = majority of nodes, 5 s timeout, fsync before acknowledging
WriteConcern custom = new WriteConcern(paranoidReplicas, LOOONG_TIMEOUT, true);
"I have to manage the status of the cluster on the application side."
Wrong. All drivers provided by MongoDB are replica set aware. So if your primary fails, your driver will figure out which node is the new primary. Transparently.
However, what you could do is to catch the corresponding exceptions, check whether there is a primary at the moment (you need the low-level driver to do so) and, if there isn't one, retry the write after a couple of seconds (roughly the time it takes to elect a new primary; 4 seconds is what I tend to use).
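That retry logic can be sketched as a small, driver-agnostic helper. The names and the wait time are illustrative; in real code the caught exception would be the driver's MongoException, and the operation would be the actual write:

```java
import java.util.concurrent.Callable;

public class RetryOnFailover {
    // Retries the operation up to maxAttempts times, sleeping between
    // attempts to give the replica set time to elect a new primary.
    static <T> T withRetry(Callable<T> op, int maxAttempts, long waitMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {       // in real code: catch MongoException
                last = e;
                Thread.sleep(waitMillis); // e.g. 4000 ms for an election
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated write that fails twice (election in progress), then succeeds.
        int[] calls = {0};
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("no primary");
            return "written";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Cap the attempts; if there is still no primary after several election windows, the problem is operational and the application should surface the error rather than keep blocking.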
With a high write concern and a read preference, I can do load balancing without the need for a sharded cluster.
Ok, this is a bit of a stretch on my part, but it smells like that is the case.
Can you do it? Well, with reservations. First of all, you have to understand that a write concern > 1 does not imply that the change is applied to all nodes simultaneously. It only guarantees that the driver will not return until the change is written to the specified number of nodes. So until the driver returns, the change may already be applied on one node, which will happily serve the changed data when a query comes in, while it is not yet applied on another node, which will serve the unchanged data. To make a long story short: do whatever you want, but consistency among the members of a replica set remains eventual. Please see the replica set introduction on read operations for details.
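The "one node already updated, another not yet" window can be made concrete with a toy model. This is a pure simulation; nothing here is driver code:

```java
import java.util.HashMap;
import java.util.Map;

public class EventualConsistencyDemo {
    // Toy replica set: a primary and one secondary that lags behind.
    static Map<String, String> primary = new HashMap<>();
    static Map<String, String> secondary = new HashMap<>();

    static void write(String key, String value) {
        primary.put(key, value);   // applied on the primary immediately;
                                   // replication to the secondary happens later
    }

    static void replicate() {
        secondary.putAll(primary); // the secondary's oplog catches up
    }

    public static void main(String[] args) {
        write("status", "updated");
        // A secondaryPreferred read racing the replication sees stale data:
        System.out.println("secondary before replication: " + secondary.get("status"));
        replicate();
        System.out.println("secondary after replication:  " + secondary.get("status"));
    }
}
```

A higher write concern only narrows this window from the writer's point of view; a reader hitting a secondary in between still sees the old value.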
Another problem is that a higher write concern affects your application's performance. When writing only to the primary, with the data journaled, your modification will be executed rather fast. With a higher write concern, your modifications will take proportionally longer.
A related problem is that connections will be returned to your connection pool at a slower pace (and might well block threads of your thread pool, if you use some sort of servlet engine or application server), allowing fewer concurrent users, although this only applies to loads bordering your hardware's (and configuration's) limits.
Is it a good idea? IMHO, in no way. You get only marginal read load balancing with eventual consistency, and you pay for it with a rather high impact on your write performance and resource efficiency. If your primary can't handle the load, it is underdimensioned. You need to either scale up or scale out.
Asking this question on Stackoverflow will provide me with enough information to run or use a MongoDB cluster properly.
Erm... ...no. You might get a hint, but a short answer cannot replace proper training. Or reading the docs thoroughly, for that matter. See above.
Answered by Markus W Mahlberg on January 18, 2021