
How To Set or Change Timezone on Ubuntu

Rather than syncing the clock with ntpdate against pool.ntp.org, or using timedatectl with the set-timezone parameter, you can change the timezone directly with these commands (thanks to the World Wide Web):

~# mv /etc/localtime /etc/localtime.orig

~# ln -sf /usr/share/zoneinfo/Asia/<your_district> /etc/localtime
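If your Ubuntu release ships systemd, the timedatectl route mentioned above also works; a minimal sketch (Asia/Jakarta is only an example zone, pick your own from the list):

~# timedatectl list-timezones | grep Asia

~# timedatectl set-timezone Asia/Jakarta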

Tifosilinux uses it on his Sequel Server

This is how to fix the most common high-load issue that comes from SQL Server. The version here is still 2008, but it should not be much different on newer versions. In this case, I will share what I did and how to reproduce it step by step.
First, we have to confirm which process is really the heaviest. We assumed SQL Server itself was the top consumer, so log into it and execute this query.

SELECT s.session_id,
    r.status,
    r.blocking_session_id 'Blk by',
    r.wait_type,
    wait_resource,
    r.wait_time / (1000 * 60) 'Wait M',
    r.cpu_time,
    r.logical_reads,
    r.reads,
    r.writes,
    r.total_elapsed_time / (1000 * 60) 'Elaps M',
    Substring(st.TEXT, (r.statement_start_offset / 2) + 1,
        ((CASE r.statement_end_offset
            WHEN -1 THEN Datalength(st.TEXT)
            ELSE r.statement_end_offset
          END - r.statement_start_offset) / 2) + 1) AS statement_text,
    Coalesce(Quotename(Db_name(st.dbid)) + N'.' + Quotename(Object_schema_name(st.objectid, st.dbid)) + N'.' +
    Quotename(Object_name(st.objectid, st.dbid)), '') AS command_text,
    r.command,
    s.login_name,
    s.host_name,
    s.program_name,
    s.last_request_end_time,
    s.login_time,
    r.open_transaction_count
FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_exec_requests AS r
ON r.session_id = s.session_id
    CROSS APPLY sys.Dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id != @@SPID
ORDER BY r.cpu_time desc;

It will show you where the heaviest workload is coming from.

The program_name field (OPEN CURSOR …) then shows which TSQL JobStep produced that workload, and its job id is what we plug into the next query.

## OPEN CURSOR SQLAgent - TSQL JobStep (Job 0x84FAD6A2C164DF4885327CD1FF48322D : Step 2)

SELECT name FROM msdb.dbo.sysjobs WHERE job_id=CONVERT(uniqueidentifier,0x84FAD6A2C164DF4885327CD1FF48322D)

The database, table, query, and so on related to the heaviest workload will be shown. As a remedial action, we can set the related database to read-only.

ALTER DATABASE <mydb> 
    SET READ_ONLY
    WITH NO_WAIT

If you want it to roll back all open connections as well, specify the command like this:

    ALTER DATABASE <mydb> 
    SET READ_ONLY
    WITH ROLLBACK IMMEDIATE
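Once the investigation is finished, remember to switch the database back to read-write; a minimal counterpart of the statement above, using the same <mydb> placeholder:

    ALTER DATABASE <mydb>
    SET READ_WRITE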

Last, we used the performance reports from SQL Server Management Studio as a reference.


Ongoing to Understand the Definitive Guide of ..

Every enterprise is powered by data, and its applications create data. If they don't.. that makes no sense.

DNS Resolver Keeps Overwriting

The resolvconf package on Ubuntu helped me figure out how to keep my nameservers in resolv.conf. The configuration kept getting overwritten and reset to the default NS (apparently by NetworkManager ☹️).

Yes, this resolvconf package can be a powerful solution.
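A minimal sketch of the idea, assuming the classic resolvconf package rather than systemd-resolved, and with 8.8.8.8 only as an example nameserver:

~# apt-get install resolvconf

~# echo "nameserver 8.8.8.8" >> /etc/resolvconf/resolv.conf.d/head

~# resolvconf -u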

Debugger on ELK

This is my one and only article about Elasticsearch. We know how powerful the ELK stack is for digging through millions of rows to get insight and then visualizing it in Kibana, so I presume we already know how to install, configure, and monitor all the running processes. First, we have to make sure the Elasticsearch service is listening on port 9200.

curl -XGET "http://localhost:9200/_cat/indices"

Secondly, I am not sure my /etc/filebeat/filebeat.yml is set up perfectly or will fulfill your needs, but here it is. Besides the filebeat.inputs section in filebeat.yml, I also define a file input directly in logstash.conf.

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  scan_frequency: 3s
  tail_files: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log

  
#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["127.0.0.1:5060"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

You can see from the setup and output sections that the Kibana endpoint is localhost:5601 and events are shipped to output.logstash. Then start it with systemctl start filebeat
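As a side note, Filebeat can also sanity-check its own configuration and the Logstash output before you rely on it; the commands below assume the default package layout:

filebeat test config -c /etc/filebeat/filebeat.yml

filebeat test output -c /etc/filebeat/filebeat.yml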

Next, for my /etc/logstash/logstash.conf configuration, I used the pipeline below to parse an nginx log (the grok pattern is left commented out) and a pipe-delimited CSV log.

input {
  file {
    path => "/home/datalog/radius.log"
    sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}


filter {

        #This is grok pattern for parse nginx log file. Use with your own needs.
        #if[type] == "log"{
                #grok{
                        #match => {"message" => "%{IPORHOST:remote_ip} - %{DATA:user_name} \[%{HTTPDATE:access_time}\] \"%{WORD:http_method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:agent}\""}
                        #remove_field => "message"
                #}
         #}

        csv {
                columns => ["DATE", "USER", "STAT", "REG", "MACADDR", "LOCAP", "DESC", "IPADDR"]
                autodetect_column_names => false
                autogenerate_column_names => true
                skip_empty_columns => false
                skip_empty_rows => false
                separator => "|"
        }

        date {
                match => ["DATE", "yyyyMMddHHmmss"]
                target => "DATE"
        }

}


output {
  elasticsearch { hosts => ["localhost:9200"] index => "your index name goes here" }
  stdout { codec => rubydebug }
}

Note this line: stdout { codec => rubydebug }
It makes Logstash much more verbose (talkative), printing each event when you run it with these params:

 ./bin/logstash -f /etc/logstash/logstash.conf
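If you only want to validate the pipeline syntax first, Logstash also accepts a test-and-exit flag; this is an aside on top of the command above:

 ./bin/logstash -f /etc/logstash/logstash.conf --config.test_and_exit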

Our nginx sample log:

10.177.100.158 - - [05/Mar/2015:10:23:35 -0500] "GET / HTTP/1.1" 200 432 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.23"
10.177.100.158 - - [05/Mar/2015:10:23:35 -0500] "GET /favicon.ico HTTP/1.1" 404 391 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.24"
10.177.100.158 - - [05/Mar/2015:10:23:35 -0500] "GET /favicon.ico HTTP/1.1" 404 391 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.24"
10.177.100.158 - - [05/Mar/2015:10:36:07 -0500] "GET /tes.html HTTP/1.1" 404 391 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.24"
10.177.100.158 - - [05/Mar/2015:10:37:27 -0500] "GET /tes.html HTTP/1.1" 404 391 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.24"

Our csv sample log:

20191201012649|88420006432@sdfsf|success|WAG-D2-JT|b4:3a:28:cb:d7:28||Sukses Login - blank expDate[20191227211429] timeout=86400|10.10.10.5
20191201012649|9813574921432@erere|success|WAG-D5-KBL|38:71:5b:47:d1:24|SITSIT00196/TLK-WI32170615-0001:@tifosilinux|Sukses Login - SITSIT00196/TLK-WI32170615-0001:@tifosilinux expDate[20191207162126] timeout=86400|10.10.12.34
20191201012649|9812306212345@ytegh|success|WAG-D2-CKA|c4:e3:9f:4d:24:dd|JKTCKG00191/03-01AI-At-ASRAMA3:@tifosilinux|Sukses Login - JKTCKG00191/03-01AI-At-ASRAMA3:@tifosilinux expDate[20191204210918] timeout=86400|10.10.4.15

And here we are

One thing: if you plan to process and get insights from more than a hundred million log lines, increase the heap memory size and leave elasticsearch.requestTimeout and elasticsearch.shardTimeout at their default value ("0") in the kibana.yml configuration.

/etc/elasticsearch/jvm.options
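For reference, the heap is controlled by the two JVM flags below inside that file; 4g is only an example value, size it to your node's available RAM:

-Xms4g
-Xmx4g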

In order to run multiple Logstash instances, we need separate directories to store each instance's data and UUID. Then run one of these commands in each session or screen, one per configuration.

/usr/share/logstash/bin/logstash -f /etc/logstash/conf1.conf --path.data /usr/share/logstash/data1/
/usr/share/logstash/bin/logstash -f /etc/logstash/confN.conf --path.data /usr/share/logstash/dataN/

Last but not least: the most important things for avoiding duplicate data and protecting against data loss during abnormal termination are to remove the start_position => "beginning" line and to set up persistent queues for Logstash (see the sketch after the mapping example below). We can then bulk-load the data, including an automatically assigned index and _doc id, and for a field adjustment such as converting a String type to a date we can run these from Dev Tools – Console:

curl -XPUT localhost:9200/<index_name>/ppl/_bulk?pretty --data-binary @<filename>.json -H 'Content-Type: application/json'

PUT /eva
{
  "mappings": {
    "my_type": {
      "properties": {
        "initTime": {
          "type": "date" 
        }
      }
    }
  }
}
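About the persistent queue mentioned above: as far as I know it is enabled in logstash.yml (or per pipeline in pipelines.yml) rather than in the pipeline .conf itself; a minimal sketch, with the size only as an example:

queue.type: persisted
queue.max_bytes: 4gb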

############ Additional file ############

$a = 0;
// $sql is assumed to be a mysqli result set returned by an earlier mysqli_query() call.
$myfile = fopen("/home/tifosilinux/<filename>.json", "a") or die("Unable to open file!");

while($row2 = mysqli_fetch_array($sql))
{
        $a++;

        $result2 = array(
                'TRX_ID' => '"'.$row2['TRX_ID'].'"',
                'T_ID' => $row2['T_ID'],
                'AMOUNT' => $row2['AMOUNT'],
                'NOTUJUAN' => '"'.$row2['NOTUJUAN'].'"',
                'PRODUCT_CODE' => $row2['PRODUCT_CODE'],
                'BILLER' => $row2['BILLER'],
                'TGL_TRX' => date("Y-m-d\TH:i:s",strtotime($row2['TGL_TRX'])),
                'STATUS_TRX' => $row2['STATUS_TRX'],
                'ERROR_CODE' => $row2['ERROR_CODE'],
                'NM_BILLER' => $row2['NM_BILLER'],
                'NM_MR' => $row2['NM_MR'],
                'JENIS_TRX' => $row2['JENIS_TRX'],
        );

        // Bulk format: one action line ({ "index" : ... }) followed by the document itself.
        $content = '{ "index" : { "_id" : '.$a.' } }'."\n".json_encode($result2,JSON_NUMERIC_CHECK)."\n";
        fwrite($myfile, $content);
}
fclose($myfile);

As noted above, pushing data with curl is not always as smooth as running it through the Logstash configuration, since the latter is handled by Java. Depending on how many transactions are pushed into the index pattern, you may run into an nginx HTTP/1.1 413 Request Entity Too Large error.

Clues:
To run a search or count query from Dev Tools (the console dashboard):
GET /<name_index_pattern>/_search
{
  "query": {
    "match": {
      "FIELDNAME": "value"
    }
  }
}

GET /<name_index_pattern>/_count?q=FIELDNAME:value

To run the same search and count with curl from a Linux console:
curl -XGET "http://localhost:9200/<name_index_pattern>/_search?pretty" -H 'Content-Type: application/json' -d'{  "query": {    "match": {      "FIELDNAME": "value"    }  }}'

curl -XGET "http://localhost:9200/<name_index_pattern>/_count?q=FIELDNAME:value"
=====================================================================
The default value of this limit is 10000, to avoid out-of-memory errors, although we can increase it:
curl -X PUT "localhost:9200/analyze_sample?pretty" -H 'Content-Type: application/json' -d'
{
  "settings" : {
    "index.analyze.max_token_count" : NEWROWVALUE
  }
}
'


Cheers
-Hary