Instead of running ntpdate by hand, sync against pool.ntp.org, or use timedatectl with the set-timezone parameter to change the timezone properly. These commands are enough to solve the problem (thanks to the World Wide Web).
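On a systemd-based distro, a minimal sketch of both steps looks like this (Asia/Jakarta is just a placeholder; pick your own zone):

```shell
# List available zones, then set the timezone properly instead of shifting the clock
timedatectl list-timezones | grep Asia
sudo timedatectl set-timezone Asia/Jakarta

# Let systemd-timesyncd keep the clock synced against the NTP pool
sudo timedatectl set-ntp true

# Verify the timezone and sync status
timedatectl status
```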
This is how to fix the most common high-load performance issue that comes from SQL Server. The version here is still 2008, but it should be no different on newer versions. In this case, I will share what I did and how to reproduce it step by step. First, we have to check which process is really the heaviest. We assumed SQL Server was the culprit, so we logged into it and executed this query.
SELECT s.session_id,
       r.status,
       r.blocking_session_id AS 'Blk by',
       r.wait_type,
       wait_resource,
       r.wait_time / (1000 * 60) AS 'Wait M',
       r.cpu_time,
       r.logical_reads,
       r.reads,
       r.writes,
       r.total_elapsed_time / (1000 * 60) AS 'Elaps M',
       Substring(st.TEXT, (r.statement_start_offset / 2) + 1,
                 ((CASE r.statement_end_offset
                     WHEN -1 THEN Datalength(st.TEXT)
                     ELSE r.statement_end_offset
                   END - r.statement_start_offset) / 2) + 1) AS statement_text,
       Coalesce(Quotename(Db_name(st.dbid)) + N'.' +
                Quotename(Object_schema_name(st.objectid, st.dbid)) + N'.' +
                Quotename(Object_name(st.objectid, st.dbid)), '') AS command_text,
       r.command,
       s.login_name,
       s.host_name,
       s.program_name,
       s.last_request_end_time,
       s.login_time,
       r.open_transaction_count
FROM   sys.dm_exec_sessions AS s
JOIN   sys.dm_exec_requests AS r
       ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE  r.session_id != @@SPID
ORDER  BY r.cpu_time DESC;
It will show you where the heaviest process comes from. Here the program_name field (OPEN CURSOR …) showed that a SQLAgent T-SQL job step produced the heaviest load, so we used its job id to look up which job it is.
## OPEN CURSOR SQLAgent - TSQL JobStep (Job 0x84FAD6A2C164DF4885327CD1FF48322D : Step 2)
SELECT name FROM msdb.dbo.sysjobs WHERE job_id=CONVERT(uniqueidentifier,0x84FAD6A2C164DF4885327CD1FF48322D)
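If you also want to see exactly what that job step runs, msdb stores the step command in sysjobsteps; a quick lookup (a sketch, reusing the same job id from program_name):

```sql
SELECT j.name AS job_name,
       s.step_id,
       s.step_name,
       s.command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s
  ON s.job_id = j.job_id
WHERE j.job_id = CONVERT(uniqueidentifier, 0x84FAD6A2C164DF4885327CD1FF48322D);
```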
The DB, table, query, etc. related to the heaviest process will be shown. As our action, we can set the related database to read-only.
ALTER DATABASE <mydb>
SET READ_ONLY
WITH NO_WAIT
If you want it to roll back all open connections, specify the command like this:
ALTER DATABASE <mydb>
SET READ_ONLY
WITH ROLLBACK IMMEDIATE
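Once the heavy job has been dealt with, remember that the database has to be switched back, otherwise every write will keep failing:

```sql
ALTER DATABASE <mydb>
SET READ_WRITE
WITH ROLLBACK IMMEDIATE
```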
Last, we used the performance reports from SQL Server Management Studio as a reference.
The resolvconf package on Ubuntu helped me find a way to maintain the nameservers in our resolv.conf, since the configuration kept being overwritten and reset to the default NS (it seems it was being overwritten by NetworkManager ☹️).
Yes, the resolvconf package can be the solution.
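A minimal sketch of how to use it (8.8.8.8 is just an example nameserver): entries placed in the "head" file are prepended to the generated resolv.conf, so they survive rewrites.

```shell
sudo apt-get install resolvconf

# Nameservers in the head file are prepended to the generated /etc/resolv.conf
echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolvconf/resolv.conf.d/head

# Regenerate /etc/resolv.conf from the configured sources
sudo resolvconf -u
```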
This is my only article about Elasticsearch. We know how powerful the ELK stack is for digging through millions of rows, extracting insight, and visualizing it in Kibana. So I presume we already know how to install, configure, and monitor all the running processes. First, we have to make sure the Elasticsearch service is listening on port 9200.
curl -XGET "http://localhost:9200/_cat/indices"
Secondly, I am not sure my /etc/filebeat/filebeat.yml was set up correctly according to the concept, or that it fulfills your needs. Besides declaring the inputs under filebeat.inputs in filebeat.yml, I also declared a matching input in logstash.conf.
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat inputs =============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true
  scan_frequency: 3s
  tail_files: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["127.0.0.1:5060"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
You can see from the setup.kibana and output.logstash sections that the Kibana service listens on localhost:5601 and the output is redirected to Logstash. Then run it with systemctl start filebeat.
Next, in my /etc/logstash/logstash.conf configuration I set up pipelines to parse nginx logs and pipe-delimited CSV logs.
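The actual pipeline is specific to my logs, but a minimal sketch could look like this (the beats port matches the output.logstash host above; the grok pattern, tag check, and CSV column names are assumptions to adjust to your own log format):

```conf
input {
  beats {
    port => 5060
  }
}

filter {
  if "nginx" in [tags] {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  } else {
    # pipe-delimited CSV
    csv {
      separator => "|"
      columns => ["timestamp", "user", "action"]   # assumed column names
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```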
One more thing: if you are about to process and extract insight from more than a hundred million log lines, increase the heap memory size, and set elasticsearch.requestTimeout and elasticsearch.shardTimeout to "0" in the kibana.yml configuration.
/etc/elasticsearch/jvm.options
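The heap is controlled by the -Xms/-Xmx pair in that file; as a sketch (4g is an example value — keep both equal and at no more than about half of the machine's RAM):

```conf
# /etc/elasticsearch/jvm.options
-Xms4g
-Xmx4g
```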
In order to run multiple instances, we need different directories to store each instance's data and UUID. Then start each configuration in its own session or screen.
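A sketch of what starting two instances from the same installation could look like, using -E to override the data and log directories per instance so each keeps its own data and node UUID (the paths and port are examples):

```shell
# Instance 1 (e.g. inside its own screen session)
./bin/elasticsearch -E path.data=/var/lib/elasticsearch/node1 \
                    -E path.logs=/var/log/elasticsearch/node1

# Instance 2: different directories and a different HTTP port
./bin/elasticsearch -E path.data=/var/lib/elasticsearch/node2 \
                    -E path.logs=/var/log/elasticsearch/node2 \
                    -E http.port=9201
```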
Last but not least. The most important things, in order to avoid duplicate data and protect against data loss during abnormal termination: remove the `start_position => "beginning"` line from the file input and set up persistent queues in your Logstash configuration. Then we can bulk-load the data, including automatically setting the index name and _doc type; and for field adjustments, like converting a String type to date, we can do it from Dev Tools – Console:
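As an example of the String-to-date adjustment from Dev Tools – Console, an ingest pipeline with a date processor can do the conversion at index time (the field name and date format here are assumptions):

```conf
PUT _ingest/pipeline/string_to_date
{
  "processors": [
    {
      "date": {
        "field": "log_time",
        "formats": ["yyyy-MM-dd HH:mm:ss"],
        "target_field": "@timestamp"
      }
    }
  ]
}
```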
As noted above, using curl is not always as smooth as running everything through the Logstash configuration, since the latter is handled by Java. Depending on how many transactions you push to an index pattern in one request, nginx may respond with HTTP/1.1 413 Request Entity Too Large.
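When that 413 shows up and nginx is proxying in front of Elasticsearch, the usual fix is to raise client_max_body_size in the nginx configuration (100m is an example value):

```conf
# inside the server/location block that proxies to Elasticsearch
client_max_body_size 100m;
```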
Clues:
To search and count from the Dev Tools console dashboard:
GET /<name_index_pattern>/_search
{
"query": {
"match": {
"FIELDNAME": "value"
}
}
}
GET /<name_index_pattern>/_count?q=FIELDNAME:value
To search and count using curl from a Linux console:
curl -XGET "http://localhost:9200/<name_index_pattern>/_search?pretty" -H 'Content-Type: application/json' -d'{ "query": { "match": { "FIELDNAME": "value" } }}'
curl -XGET "http://localhost:9200/<name_index_pattern>/_count?q=FIELDNAME:value"
=====================================================================
These limits default to 10000 to avoid out-of-memory errors, although we can increase them, e.g. index.analyze.max_token_count when creating an index:
curl -X PUT "localhost:9200/analyze_sample?pretty" -H 'Content-Type: application/json' -d'
{
  "settings" : {
    "index.analyze.max_token_count" : NEWROWVALUE
  }
}
'