Exception when inserting data into a Geode cluster

I am running a data-write test against a Geode cluster. Inserting data throws the exception below, but the problem does not occur in stand-alone mode.
Caused by: java.lang.AssertionError: Version stamp should have a member at this point for entry VersionedThinDiskLRURegionEntryHeapObjectKey#4b77fc81 (key=CaTaskListObjKey [TASK_ID=227823562808, ORG_NO=32401]; rawValue=REMOVED_PHASE1; version={v0; rv0; ds=0; time=0};member=null)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.Oplog.create(Oplog.java:3434)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.PersistentOplogSet.create(PersistentOplogSet.java:181)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.DiskStoreImpl.put(DiskStoreImpl.java:719)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.DiskRegion.put(DiskRegion.java:338)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeBytesToDisk(DiskEntry.java:826)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.entries.DiskEntry$Helper.basicUpdate(DiskEntry.java:948)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.entries.DiskEntry$Helper.update(DiskEntry.java:860)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.entries.AbstractDiskRegionEntry.setValue(AbstractDiskRegionEntry.java:40)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.entries.AbstractRegionEntry.setValueWithTombstoneCheck(AbstractRegionEntry.java:306)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.EntryEventImpl.setNewValueInRegion(EntryEventImpl.java:1710)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.EntryEventImpl.putNewEntry(EntryEventImpl.java:1614)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.RegionMapPut.createEntry(RegionMapPut.java:420)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.RegionMapPut.createOrUpdateEntry(RegionMapPut.java:244)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutAndDeliverEvent(AbstractRegionMapPut.java:297)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWithIndexUpdatingInProgress(AbstractRegionMapPut.java:305)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutIfPreconditionsSatisified(AbstractRegionMapPut.java:293)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutOnSynchronizedRegionEntry(AbstractRegionMapPut.java:279)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutOnRegionEntryInMap(AbstractRegionMapPut.java:270)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.addRegionEntryToMapAndDoPut(AbstractRegionMapPut.java:248)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutRetryingIfNeeded(AbstractRegionMapPut.java:213)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.doWithIndexInUpdateMode(AbstractRegionMapPut.java:195)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPut(AbstractRegionMapPut.java:177)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWhileLockedForCacheModification(AbstractRegionMapPut.java:119)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.RegionMapPut.runWhileLockedForCacheModification(RegionMapPut.java:150)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.map.AbstractRegionMapPut.put(AbstractRegionMapPut.java:167)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.AbstractRegionMap.basicPut(AbstractRegionMap.java:2100)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.BucketRegion.virtualPut(BucketRegion.java:527)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.PartitionedRegionDataStore.putLocally(PartitionedRegionDataStore.java:1194)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.PartitionedRegionDataStore.putLocally(PartitionedRegionDataStore.java:1177)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.PartitionedRegionDataView.putEntryOnRemote(PartitionedRegionDataView.java:99)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.partitioned.PutAllPRMessage.doLocalPutAll(PutAllPRMessage.java:470)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.partitioned.PutAllPRMessage.operateOnPartitionedRegion(PutAllPRMessage.java:324)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:325)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:367)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:432)
at Remote Member '20.46.163.166(server166:13782):41000' in java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at Remote Member '20.46.163.166(server166:13782):41000' in java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:949)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.distributed.internal.ClusterDistributionManager.doPartitionRegionThread(ClusterDistributionManager.java:851)
at Remote Member '20.46.163.166(server166:13782):41000' in org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
at Remote Member '20.46.163.166(server166:13782):41000' in java.lang.Thread.run(Thread.java:745)
at org.apache.geode.distributed.internal.ReplyException.handleCause(ReplyException.java:90)
at org.apache.geode.internal.cache.partitioned.PartitionMessage$PartitionResponse.waitForCacheException(PartitionMessage.java:832)
at org.apache.geode.internal.cache.partitioned.PutAllPRMessage$PutAllResponse.waitForResult(PutAllPRMessage.java:845)
at org.apache.geode.internal.cache.PartitionedRegion.tryToSendOnePutAllMessage(PartitionedRegion.java:2679)
at org.apache.geode.internal.cache.PartitionedRegion.sendMsgByBucket(PartitionedRegion.java:2454)
at org.apache.geode.internal.cache.PartitionedRegion.postPutAllSend(PartitionedRegion.java:2225)
at org.apache.geode.internal.cache.LocalRegionDataView.postPutAll(LocalRegionDataView.java:326)
at org.apache.geode.internal.cache.LocalRegion.basicPutAll(LocalRegion.java:9698)
at org.apache.geode.internal.cache.LocalRegion.basicBridgePutAll(LocalRegion.java:9367)
at org.apache.geode.internal.cache.tier.sockets.command.PutAll80.cmdExecute(PutAll80.java:270)
at org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:178)
at org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:844)
at org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:74)
at org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:594)
at org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
at java.lang.Thread.run(Thread.java:745)

Related

Unable to install grails plugin mongodb

I am using Grails 2.0.4 with the GGTS IDE. When I try to install the MongoDB plugin, I get errors. I have tried two approaches.
**1) compile ":mongodb:1.0.0.GA"**
This produces the following error:
| Loading Grails 2.0.4
| Configuring classpath
:: problems summary ::
:::: ERRORS
Server access Error: Connection timed out: connect url=http://plugins.grails.org/grails-mongodb/tags/RELEASE_1_0_0_GA/mongodb-1.0.0.GA.pom
Server access Error: Connection timed out: connect url=http://plugins.grails.org/grails-mongodb/tags/RELEASE_1_0_0_GA/grails-mongodb-1.0.0.GA.jar
Server access Error: Connection timed out: connect url=http://repo.grails.org/grails/plugins//mongodb/1.0.0.GA/mongodb-1.0.0.GA.pom
Server access Error: Connection timed out: connect url=http://repo.grails.org/grails/plugins//mongodb/1.0.0.GA/mongodb-1.0.0.GA.jar
Server access Error: Connection timed out: connect url=http://repo.grails.org/grails/core//mongodb/1.0.0.GA/mongodb-1.0.0.GA.pom
Server access Error: Connection timed out: connect url=http://repo.grails.org/grails/core//mongodb/1.0.0.GA/mongodb-1.0.0.GA.jar
Server access Error: Connection timed out: connect url=http://svn.codehaus.org/grails/trunk/grails-plugins/grails-mongodb/tags/RELEASE_1_0_0_GA/mongodb-1.0.0.GA.pom
Server access Error: Connection timed out: connect url=http://svn.codehaus.org/grails/trunk/grails-plugins/grails-mongodb/tags/RELEASE_1_0_0_GA/grails-mongodb-1.0.0.GA.jar
Server access Error: Connection timed out: connect url=http://repo1.maven.org/maven2//mongodb/1.0.0.GA/mongodb-1.0.0.GA.pom
Server access Error: Connection timed out: connect url=http://repo1.maven.org/maven2//mongodb/1.0.0.GA/mongodb-1.0.0.GA.jar
| Error Failed to resolve dependencies (Set log level to 'warn' in BuildConfig.groovy for more information):
:mongodb:1.0.0.GA
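Two things stand out in the log above. First, every repository times out rather than returning a 404, which points at a firewall or proxy blocking outbound HTTP (Grails 2 ships `add-proxy`/`set-proxy` commands for working behind one). Second, the doubled slash in URLs like `.../plugins//mongodb/...` comes from an empty group segment in the resolution pattern. A minimal sketch of that pattern, inferred from the logged URLs (the pattern itself is an assumption, not taken from Grails source):

```python
def artifact_url(base, group, name, version, ext):
    # Join the path segments the way the resolver appears to:
    # [base]/[group]/[name]/[version]/[name]-[version].[ext]
    # An empty group produces the "//" seen in the log.
    return "/".join([base, group, name, version, name + "-" + version + "." + ext])

url = artifact_url("http://repo.grails.org/grails/plugins", "", "mongodb", "1.0.0.GA", "pom")
print(url)  # matches the logged URL, doubled slash included
```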
**2) grails install-plugin C:\Documents and Settings\marao18\Desktop\gorm-mongodb-0.5.4**
This produces the following error. Please help.
| Loading Grails 2.0.4
| Configuring classpath.
| Environment set to development.....
| Resolving plugin C:\Documents. Please wait...
:: problems summary ::
:::: ERRORS
Server access Error: Connection timed out: connect url=http://plugins.grails.org/grails-%5CDocuments/tags/RELEASE_and/%5CDocuments-and.pom
Server access Error: Connection timed out: connect url=http://plugins.grails.org/grails-%5CDocuments/tags/RELEASE_and/grails-%5CDocuments-and.zip
Server access Error: Connection timed out: connect url=http://repo.grails.org/grails/plugins/C/%5CDocuments/and/%5CDocuments-and.pom
Server access Error: Connection timed out: connect url=http://repo.grails.org/grails/plugins/C/%5CDocuments/and/%5CDocuments-and.zip
Server access Error: Connection timed out: connect url=http://repo.grails.org/grails/core/C/%5CDocuments/and/%5CDocuments-and.pom
Server access Error: Connection timed out: connect url=http://repo.grails.org/grails/core/C/%5CDocuments/and/%5CDocuments-and.zip
Server access Error: Connection timed out: connect url=http://svn.codehaus.org/grails/trunk/grails-plugins/grails-%5CDocuments/tags/RELEASE_and/%5CDocuments-and.pom
Server access Error: Connection timed out: connect url=http://svn.codehaus.org/grails/trunk/grails-plugins/grails-%5CDocuments/tags/RELEASE_and/grails-%5CDocuments-and.zip
Server access Error: Connection timed out: connect url=http://repo1.maven.org/maven2/C/%5CDocuments/and/%5CDocuments-and.pom
Server access Error: Connection timed out: connect url=http://repo1.maven.org/maven2/C/%5CDocuments/and/%5CDocuments-and.zip
| Error resolving plugin [name:\Documents, group:C, version:and]. Plugin not found.
| Error Plugin not found for name [C:\Documents] and version [and]
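The final error line explains the second failure: because the path contains spaces and is unquoted, it never reaches `install-plugin` as a single argument. A hedged reconstruction of the mis-parse implied by `[name:\Documents, group:C, version:and]`:

```python
# The unquoted path is word-split on spaces before reaching install-plugin,
# and the first token is then split on ":" as group:name coordinates.
raw = r"C:\Documents and Settings\marao18\Desktop\gorm-mongodb-0.5.4"
tokens = raw.split(" ")             # shell word-splitting on the unquoted spaces
group, name = tokens[0].split(":")  # 'C' and '\Documents'
version = tokens[1]                 # 'and'
print(group, name, version)
```

Quoting the path ("C:\Documents and Settings\...") or moving the plugin to a path without spaces avoids the mis-parse.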

How to fix LocalDB on Windows 7

I created an application using LocalDB that works perfectly on Windows 8 and 10 (with just the SqlLocalDB installation), but it does not work on Windows 7. I installed every version of SqlLocalDB on Windows 7, but I still get the following error:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
Connection string:
"Data Source=(LocalDB)\V12.0;AttachDbFilename=" + Application.StartupPath + @"\db\IndexProject.mdf;Integrated Security=True;Connect Timeout=30"
Please help.
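Error 26 means the named LocalDB instance could not be located, so the first thing to check on the Windows 7 machine is which instances actually exist (`sqllocaldb info` lists them); note that SQL Server 2014 LocalDB names its automatic instance `MSSQLLocalDB`, so `(LocalDB)\V12.0` may simply not exist there. A small sketch that assembles the connection string from the question and inspects its parts (`startup_path` is hypothetical and stands in for `Application.StartupPath`):

```python
# Assemble the connection string as the C# concatenation does, then split it
# into key/value pairs to see exactly which instance name is being requested.
startup_path = r"C:\MyApp"  # hypothetical stand-in for Application.StartupPath
conn = ("Data Source=(LocalDB)\\V12.0;AttachDbFilename=" + startup_path
        + r"\db\IndexProject.mdf;Integrated Security=True;Connect Timeout=30")
pairs = dict(part.split("=", 1) for part in conn.split(";"))
print(pairs["Data Source"])       # the instance the provider will look for
print(pairs["AttachDbFilename"])  # the .mdf path being attached
```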

Error while taking the backup using pg_dump

We are using PostgreSQL 9.2. While taking a backup of the database using pg_dump, we get the errors below.
Please guide me on how to fix this issue.
pg_dump: [archiver (db)] query was: COPY public.aclappliedtopep (id, instance_version, direction, aclname, ifname, owningentityid, protocolendpoint_id, deploypending, authentityid, authentityclass, accesscontrollist_id) TO stdout;
pg_dump: FATAL: terminating connection due to administrator command
pg_dump: [archiver (db)] query failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_dump: [archiver (db)] query was: COPY public.aclappliedtopep (id, instance_version, direction, aclname, ifname, owningentityid, protocolendpoint_id, deploypending, authentityid, authentityclass, accesscontrollist_id) TO stdout;
pg_dump: [archiver (db)] connection to database "qovr" failed: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5433?
pg_dump: error reading large object 69417845: FATAL: terminating connection due to administrator command
FATAL: terminating connection due to administrator command
pg_dump: could not open large object 59087743: FATAL: terminating connection due to administrator command
FATAL: terminating connection due to administrator command
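Reading the log as a sequence: "terminating connection due to administrator command" means the backend serving pg_dump was killed from outside (a server shutdown/restart or a `pg_terminate_backend` call), and the later "Connection refused" shows the server was down when pg_dump tried to reconnect; the PostgreSQL server log for that time window is the place to look. For reference, a sketch of the invocation the log implies (host, port, and database name are taken from the messages; the output file and format are hypothetical):

```python
# Hedged sketch of the pg_dump command line implied by the log output.
cmd = [
    "pg_dump",
    "-h", "localhost",  # the log asks about host "localhost" (127.0.0.1)
    "-p", "5433",       # non-default port, from the log
    "-F", "c",          # custom-format archive (restorable with pg_restore)
    "-f", "qovr.dump",  # hypothetical output file name
    "qovr",             # database name, from the log
]
print(" ".join(cmd))
```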

Xcode 10 seems to break com.apple.commcenter.coretelephony.xpc

I have upgraded to Xcode 10 on High Sierra and now the Rewarded Ad example project from AdMob complains that com.apple.commcenter.coretelephony.xpc is not working correctly.
Is there a new entitlement that I have to enable? I have been searching for hours without any clue.
UPDATE:
This only happens in the simulator. On a device it works fine. They must have added a new restriction.
2018-09-22 10:59:39.730813+0100 RewardedVideoExample[1449:26168] libMobileGestalt MobileGestalt.c:890: MGIsDeviceOneOfType is not supported on this platform.
2018-09-22 10:59:40.031746+0100 RewardedVideoExample[1449:26281] Failed to create remote object proxy: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated." UserInfo={NSDebugDescription=The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated.}
2018-09-22 10:59:40.031865+0100 RewardedVideoExample[1449:26261] Failed to ping server after delegate was set
2018-09-22 10:59:40.031938+0100 RewardedVideoExample[1449:26262] Failed to create synchronous remote object proxy: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated." UserInfo={NSDebugDescription=The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated.}
2018-09-22 10:59:40.032054+0100 RewardedVideoExample[1449:26262] [NetworkInfo] Descriptors query returned error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated." UserInfo={NSDebugDescription=The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated.}
2018-09-22 10:59:40.032353+0100 RewardedVideoExample[1449:26262] Failed to create synchronous remote object proxy: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated." UserInfo={NSDebugDescription=The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated.}
2018-09-22 10:59:40.032451+0100 RewardedVideoExample[1449:26262] [NetworkInfo] Descriptors query returned error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated." UserInfo={NSDebugDescription=The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated.}
2018-09-22 10:59:40.035631+0100 RewardedVideoExample[1449:26262] Failed to create synchronous remote object proxy: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated." UserInfo={NSDebugDescription=The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated.}
2018-09-22 10:59:40.035714+0100 RewardedVideoExample[1449:26262] [NetworkInfo] Descriptors query returned error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated." UserInfo={NSDebugDescription=The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated.}
2018-09-22 10:59:40.259658+0100 RewardedVideoExample[1449:26314] WF: === Starting WebFilter logging for process RewardedVideoExample
2018-09-22 10:59:40.259805+0100 RewardedVideoExample[1449:26314] WF: _userSettingsForUser : (null)
2018-09-22 10:59:40.259876+0100 RewardedVideoExample[1449:26314] WF: _WebFilterIsActive returning: NO
2018-09-22 10:59:41.020170+0100 RewardedVideoExample[1449:26282] <Google> Cannot find an ad network adapter with the name(s): com.google.DummyAdapter. Remember to link all required ad network adapters and SDKs, and set -ObjC in the 'Other Linker Flags' setting of your build target.
Reward based video ad failed to load: No ad returned from any ad server.
2018-09-22 11:00:09.288227+0100 RewardedVideoExample[1449:26168] [MC] System group container for systemgroup.com.apple.configurationprofiles path is /Users/houmie/Library/Developer/CoreSimulator/Devices/3FF81C00-0DA2-4F98-8964-A84F14FB14A6/data/Containers/Shared/SystemGroup/systemgroup.com.apple.configurationprofiles
2018-09-22 11:00:09.289859+0100 RewardedVideoExample[1449:26168] [MC] Reading from private effective user settings.
Running this in Terminal made it go away:
xcrun simctl spawn booted log config --mode "level:off" --subsystem com.apple.CoreTelephony
Restarting the simulator is another workaround.
For those experiencing this issue on real devices, linking CoreTelephony.framework into the project fixes the problem.
It does not fix it for the simulator, though.

Unable to determine ZooKeeper ensemble

Status:
ZooKeeper is running.
The HBase master is also running fine and waiting for region servers.
Now, when I start the region server, I receive the following error:
17/04/24 20:13:23 ERROR master.HMaster: Region server icosa4,60020,1493045002304 reported a fatal error:
ABORTING region server icosa4,60020,1493045002304: Unhandled exception: Unable to determine ZooKeeper ensemble
Cause:
java.io.IOException: Unable to determine ZooKeeper ensemble
at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:116)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:153)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:127)
at org.apache.hadoop.hbase.replication.ReplicationPeer.reloadZkWatcher(ReplicationPeer.java:170)
at org.apache.hadoop.hbase.replication.ReplicationPeer.<init>(ReplicationPeer.java:69)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.getPeer(ReplicationZookeeper.java:343)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.connectToPeer(ReplicationZookeeper.java:308)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.connectExistingPeers(ReplicationZookeeper.java:189)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.<init>(ReplicationZookeeper.java:156)
at org.apache.hadoop.hbase.replication.regionserver.Replication.initialize(Replication.java:105)
at org.apache.hadoop.hbase.regionserver.HRegionServer.newReplicationInstance(HRegionServer.java:4035)
at org.apache.hadoop.hbase.regionserver.HRegionServer.createNewReplicationInstance(HRegionServer.java:4004)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1416)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1100)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:753)
at java.lang.Thread.run(Thread.java:745)
17/04/24 20:13:23 INFO zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, processing expiration [icosa4,60020,1493045002304]
When the region server starts, the following error is logged:
17/04/24 20:42:20 INFO replication.ReplicationZookeeper: Replication is now started
17/04/24 20:42:20 INFO zookeeper.RecoverableZooKeeper: Node /hbase/replication/state already exists and this is not a retry
17/04/24 20:42:20 WARN zookeeper.ZKConfig: java.net.UnknownHostException: PBUF
5icosa4: unknown error
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at java.net.InetAddress.getByName(InetAddress.java:1076)
at org.apache.hadoop.hbase.zookeeper.ZKConfig.getZKQuorumServersString(ZKConfig.java:201)
at org.apache.hadoop.hbase.zookeeper.ZKConfig.getZKQuorumServersString(ZKConfig.java:245)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:147)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:127)
at org.apache.hadoop.hbase.replication.ReplicationPeer.reloadZkWatcher(ReplicationPeer.java:170)
at org.apache.hadoop.hbase.replication.ReplicationPeer.<init>(ReplicationPeer.java:69)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.getPeer(ReplicationZookeeper.java:343)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.connectToPeer(ReplicationZookeeper.java:308)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.connectExistingPeers(ReplicationZookeeper.java:189)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.<init>(ReplicationZookeeper.java:156)
at org.apache.hadoop.hbase.replication.regionserver.Replication.initialize(Replication.java:105)
at org.apache.hadoop.hbase.regionserver.HRegionServer.newReplicationInstance(HRegionServer.java:4035)
at org.apache.hadoop.hbase.regionserver.HRegionServer.createNewReplicationInstance(HRegionServer.java:4004)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1416)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1100)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:753)
at java.lang.Thread.run(Thread.java:745)
17/04/24 20:42:20 ERROR zookeeper.ZKConfig: no valid quorum servers found in zoo.cfg
17/04/24 20:42:20 WARN regionserver.HRegionServer: Exception in region server :
java.io.IOException: Unable to determine ZooKeeper ensemble
at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:116)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:153)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:127)
at org.apache.hadoop.hbase.replication.ReplicationPeer.reloadZkWatcher(ReplicationPeer.java:170)
at org.apache.hadoop.hbase.replication.ReplicationPeer.<init>(ReplicationPeer.java:69)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.getPeer(ReplicationZookeeper.java:343)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.connectToPeer(ReplicationZookeeper.java:308)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.connectExistingPeers(ReplicationZookeeper.java:189)
at org.apache.hadoop.hbase.replication.ReplicationZookeeper.<init>(ReplicationZookeeper.java:156)
at org.apache.hadoop.hbase.replication.regionserver.Replication.initialize(Replication.java:105)
at org.apache.hadoop.hbase.regionserver.HRegionServer.newReplicationInstance(HRegionServer.java:4035)
at org.apache.hadoop.hbase.regionserver.HRegionServer.createNewReplicationInstance(HRegionServer.java:4004)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1416)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1100)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:753)
at java.lang.Thread.run(Thread.java:745)
17/04/24 20:42:20 INFO regionserver.HRegionServer: STOPPED: Failed initialization
17/04/24 20:42:20 ERROR regionserver.HRegionServer: Failed init
I solved it by adding this to conf/hbase-site.xml:
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
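One plausible reading of the earlier `UnknownHostException: PBUF 5icosa4`: the replication peer data read from ZooKeeper starts with the protobuf magic "PBUF", but `ZKConfig` treats it as a plain `host:port,host:port` quorum string, so garbage is handed to DNS. That is consistent with the fix above, which points `zookeeper.znode.parent` at the znode tree this cluster (here `/hbase-unsecure`, the usual Ambari default) actually uses. A small sketch of that failure mode:

```python
# Hedged sketch: quorum-string parsing applied to protobuf-encoded peer data,
# as the stack trace through ZKConfig.getZKQuorumServersString suggests.
peer_data = "PBUF\n5icosa4"  # the "host" reported in the UnknownHostException
hosts = [entry.split(":")[0] for entry in peer_data.split(",")]
print(hosts)  # a single entry that is not a resolvable hostname
```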
