Whether the planner uses an Index Only Scan is governed by the visibility map, which PostgreSQL maintains automatically during vacuum. When an Index Only Scan is not chosen (or shows many heap fetches), the likely cause is that too many pages are not marked all-visible in the visibility map, while the table has not yet reached the autovacuum threshold. Running VACUUM manually can resolve this.
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1241.14..1241.15 rows=1 width=8) (actual time=5.454..5.454 rows=1 loops=1)
-> Index Only Scan using member_query on site_user m (cost=0.42..1183.06 rows=23236 width=0) (actual time=0.018..4.265 rows=25829 loops=1)
Index Cond: (company_id = 'efbf0bbb-f02c-44cc-a04e-de6b36921435'::bpchar)
Heap Fetches: 0
Planning Time: 0.105 ms
Execution Time: 5.473 ms
(6 rows)
create index user_login_date on user_login_record (group_id, user_id, machine_id, login_date desc);
Comparison
max
explain (analyze, buffers, costs) select max(login_date) from user_login_record where group_id='312bb069-fd27-4822-885d-c3ac67bfd8a1' and user_id='952bd155-06b3-4792-82ec-4b86d06c86a7' and machine_id='C635F8F44F1960778CE58869DF10150D';
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=2.84..2.85 rows=1 width=8) (actual time=0.143..0.143 rows=1 loops=1)
   Buffers: shared hit=72
   ->  Index Only Scan using user_login_date on user_login_record  (cost=0.41..2.84 rows=1 width=8) (actual time=0.050..0.126 rows=98 loops=1)
         Index Cond: ((group_id = '312bb069-fd27-4822-885d-c3ac67bfd8a1'::bpchar) AND (user_id = '952bd155-06b3-4792-82ec-4b86d06c86a7'::bpchar) AND (machine_id = 'C635F8F44F1960778CE58869DF10150D'::bpchar))
         Heap Fetches: 98
         Buffers: shared hit=72
 Planning Time: 0.139 ms
 Execution Time: 0.163 ms
(8 rows)
order by … desc limit 1
explain (analyze, buffers, costs) select login_date from user_login_record where group_id='312bb069-fd27-4822-885d-c3ac67bfd8a1' and user_id='952bd155-06b3-4792-82ec-4b86d06c86a7' and machine_id='C635F8F44F1960778CE58869DF10150D' order by login_date desc limit 1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.41..2.84 rows=1 width=8) (actual time=0.042..0.042 rows=1 loops=1)
   Buffers: shared hit=4
   ->  Index Only Scan using user_login_date on user_login_record  (cost=0.41..2.84 rows=1 width=8) (actual time=0.041..0.041 rows=1 loops=1)
         Index Cond: ((group_id = '312bb069-fd27-4822-885d-c3ac67bfd8a1'::bpchar) AND (user_id = '952bd155-06b3-4792-82ec-4b86d06c86a7'::bpchar) AND (machine_id = 'C635F8F44F1960778CE58869DF10150D'::bpchar))
         Heap Fetches: 1
         Buffers: shared hit=4
 Planning Time: 0.124 ms
 Execution Time: 0.057 ms
(8 rows)
Conclusion
Both queries use the index, but in different ways. The max() version does not exploit the index ordering on login_date: it scans all 98 matching index entries and then aggregates them (rows=98 loops=1). The ORDER BY … LIMIT 1 version does exploit the login_date ordering within the index and fetches only a single entry (rows=1 loops=1). Although max() adds no extra loop here, reading many more entries makes it considerably slower than the other query.
java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
at java.base/java.lang.Thread.start0(Native Method)
at java.base/java.lang.Thread.start(Thread.java:798)
at me.chanjar.weixin.common.api.WxMessageInMemoryDuplicateChecker.checkBackgroundProcessStarted(WxMessageInMemoryDuplicateChecker.java:81)
at me.chanjar.weixin.common.api.WxMessageInMemoryDuplicateChecker.isDuplicate(WxMessageInMemoryDuplicateChecker.java:89)
at me.chanjar.weixin.mp.api.WxMpMessageRouter.isMsgDuplicated(WxMpMessageRouter.java:257)
at me.chanjar.weixin.mp.api.WxMpMessageRouter.route(WxMpMessageRouter.java:172)
at me.chanjar.weixin.open.api.impl.WxOpenMessageRouter.route(WxOpenMessageRouter.java:24)
at me.chanjar.weixin.open.api.impl.WxOpenMessageRouter.route(WxOpenMessageRouter.java:20)
at cn.devmgr.mall.wechatopen.WechatNotifyController.callback(WechatNotifyController.java:269)
at jdk.internal.reflect.GeneratedMethodAccessor433.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
... ...
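"unable to create native thread" usually means the operating system refused to create another thread (process or resource limits), often because the application is leaking threads, rather than because the heap is exhausted. Before raising ulimits, it is worth confirming how many threads the JVM actually holds. A minimal sketch using the standard ThreadMXBean (the class name ThreadCountCheck is made up for illustration):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Live threads right now, the historical peak, and the total ever
        // started; a peak far above expectations usually points at a leak.
        System.out.println("live:    " + threads.getThreadCount());
        System.out.println("peak:    " + threads.getPeakThreadCount());
        System.out.println("started: " + threads.getTotalStartedThreadCount());
    }
}
```

If the peak keeps climbing over time, the next step is finding out who creates the threads, which the list of approaches below addresses.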
Here is the list of approaches, sorted from quickest / most reliable to slowest / hardest:
1. If you have the source of the class, create an exception in the constructor (without actually throwing it). You can simply examine or print it when you need to know when the thread was created.
2. If you don't have the sources, the thread name can be a good hint who created it.
3. If the name hints to a generic service (like java.util.Timer), then you can create a conditional breakpoint in your IDE in the constructor. The condition should be the thread name; the debugger will then stop when someone creates a thread with this name.
4. If you don't have too many threads, set a breakpoint in the constructors of Thread.
5. If you have many threads, attach a debugger to the app and freeze it. Then examine the stack traces.
6. If everything else fails, get the source code for the Java runtime and add logging code in the classes you want to observe, compile a new rt.jar and replace the original one with your version. Don't try this in production, please.
7. If money isn't an issue, you can use dynamic tracing tools like Compuware APM or, if you're on Linux or Solaris, you can try SystemTap and dtrace, respectively.
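Approach 1 from the list above can be sketched as follows: construct an exception in the thread's constructor without throwing it, so the creation stack can be printed later on demand. TrackedThread is a hypothetical name, not part of any library:

```java
// Sketch of approach 1: record the creation stack trace in the constructor
// (without throwing it), so it can be examined when the thread misbehaves.
public class TrackedThread extends Thread {
    // Creating the exception captures the full stack at construction time.
    private final Exception creationPoint = new Exception("thread created here");

    public TrackedThread(Runnable target, String name) {
        super(target, name);
    }

    public StackTraceElement[] creationStack() {
        return creationPoint.getStackTrace();
    }

    public static void main(String[] args) {
        TrackedThread t = new TrackedThread(() -> {}, "demo-thread");
        // Prints the stack that constructed the thread.
        for (StackTraceElement e : t.creationStack()) {
            System.out.println("  at " + e);
        }
    }
}
```

The same idea works for thread pools by passing a custom ThreadFactory that creates such tracked threads.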
@GetMapping("/threads")
public Map<String, ?> getThreadStackTraces() {
    logger.trace("getThreadStackTraces");
    // Threads are grouped by state, then deduplicated by a key built from
    // the top of their stack trace, with a counter and the thread names.
    Map<String, List<Map<String, Object>>> result = new HashMap<>();
    Map<String, Map<String, Object>> threadInfoByKey = new HashMap<>();
    for (Thread t : Thread.getAllStackTraces().keySet()) {
        StackTraceElement[] elements = t.getStackTrace();
        List<Map<String, Object>> stackList = new ArrayList<>();
        StringBuilder builder = new StringBuilder();
        int i = 0;
        for (StackTraceElement ele : elements) {
            i++;
            Map<String, Object> map = new HashMap<>();
            map.put("className", ele.getClassName());
            map.put("methodName", ele.getMethodName());
            map.put("lineNum", ele.getLineNumber());
            map.put("fileName", ele.getFileName());
            map.put("module", ele.getModuleName());
            stackList.add(map);
            // Only the top 30 frames contribute to the dedup key.
            if (i < 30) {
                builder.append(ele.getClassName());
                builder.append(ele.getMethodName());
            }
        }
        String status = t.getState().toString().toLowerCase();
        String key = builder.toString();
        Map<String, Object> threadInfo = threadInfoByKey.get(key);
        if (threadInfo == null) {
            threadInfo = new HashMap<>();
            threadInfo.put("stack", stackList);
            threadInfoByKey.put(key, threadInfo);
            List<Map<String, Object>> list = result.get(status);
            if (list == null) {
                list = new ArrayList<>();
                result.put(status, list);
            }
            list.add(threadInfo);
        }
        if (threadInfo.containsKey("counter")) {
            threadInfo.put("counter", (Integer) threadInfo.get("counter") + 1);
        } else {
            threadInfo.put("counter", 1);
        }
        if (threadInfo.containsKey("names")) {
            @SuppressWarnings("unchecked")
            List<String> names = (List<String>) threadInfo.get("names");
            names.add(t.getName());
        } else {
            List<String> names = new ArrayList<>();
            names.add(t.getName());
            threadInfo.put("names", names);
        }
    }
    return result;
}
org.springframework.data.redis.serializer.SerializationException: Cannot deserialize; nested exception is org.springframework.core.serializer.support.SerializationFailedException: Failed to deserialize payload. Is the byte array a result of corresponding serialization for DefaultDeserializer?; nested exception is java.io.InvalidClassException: com.package-of-pojo.Xxxx; local class incompatible: stream classdesc serialVersionUID = -2364286648166609117, local class serialVersionUID = -8974455668551700477
at org.springframework.data.redis.serializer.JdkSerializationRedisSerializer.deserialize(JdkSerializationRedisSerializer.java:84)
at org.springframework.data.redis.serializer.DefaultRedisElementReader.read(DefaultRedisElementReader.java:48)
at org.springframework.data.redis.serializer.RedisSerializationContext$SerializationPair.read(RedisSerializationContext.java:272)
at org.springframework.data.redis.cache.RedisCache.deserializeCacheValue(RedisCache.java:260)
at org.springframework.data.redis.cache.RedisCache.lookup(RedisCache.java:94)
at org.springframework.cache.support.AbstractValueAdaptingCache.get(AbstractValueAdaptingCache.java:58)
at org.springframework.cache.interceptor.AbstractCacheInvoker.doGet(AbstractCacheInvoker.java:73)
at org.springframework.cache.interceptor.CacheAspectSupport.findInCaches(CacheAspectSupport.java:554)
at org.springframework.cache.interceptor.CacheAspectSupport.findCachedItem(CacheAspectSupport.java:519)
at org.springframework.cache.interceptor.CacheAspectSupport.execute(CacheAspectSupport.java:401)
at org.springframework.cache.interceptor.CacheAspectSupport.execute(CacheAspectSupport.java:345)
at org.springframework.cache.interceptor.CacheInterceptor.invoke(CacheInterceptor.java:61)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
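The root cause of this InvalidClassException is that the cached POJO class was recompiled with structural changes between writing to Redis and reading back, so the JVM-computed serialVersionUID no longer matches. The usual fix is to declare serialVersionUID explicitly on the cached class (and to flush stale cache entries once). A minimal sketch, where CachedPojo stands in for the real POJO from the stack trace:

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class SerialVersionDemo {
    static class CachedPojo implements Serializable {
        // Without this field the JVM derives the UID from the class
        // structure, so any recompiled change to the class invalidates
        // previously serialized Redis entries.
        private static final long serialVersionUID = 1L;
        String name;
    }

    public static void main(String[] args) {
        long uid = ObjectStreamClass.lookup(CachedPojo.class).getSerialVersionUID();
        System.out.println("serialVersionUID = " + uid); // prints 1
    }
}
```

Alternatively, switching the cache to a JSON serializer (e.g. GenericJackson2JsonRedisSerializer) sidesteps Java serialization compatibility entirely.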
cd /var/lib/postgresql/12
rm -r main
pg_basebackup -h 192.168.1.11 -p 5432 -U postgres -Fp -Xs -Pv -R -D ./main
chown -R postgres:postgres main
If the pg_basebackup command succeeds, output similar to the following appears:
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/4000028 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_8071"
32514/32514 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 0/4000100
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: base backup completed
Do not configure the JavaFX Application subclass as the main class for the shade plugin; otherwise the packaged jar will not start, failing with an error like "Error: JavaFX runtime components are missing, and are required to run this application". The fix is to create a new launcher class that does not extend javafx.application.Application and have it start the real application.
Graphics Device initialization failed for :  es2, sw
Error initializing QuantumRenderer: no suitable pipeline found
java.lang.RuntimeException: java.lang.RuntimeException: Error initializing QuantumRenderer: no suitable pipeline found
	at javafx.graphics/com.sun.javafx.tk.quantum.QuantumRenderer.getInstance(QuantumRenderer.java:280)
	at javafx.graphics/com.sun.javafx.tk.quantum.QuantumToolkit.init(QuantumToolkit.java:244)
	at javafx.graphics/com.sun.javafx.tk.Toolkit.getToolkit(Toolkit.java:261)
	at javafx.graphics/com.sun.javafx.application.PlatformImpl.startup(PlatformImpl.java:267)
	at javafx.graphics/com.sun.javafx.application.PlatformImpl.startup(PlatformImpl.java:158)