A Fluent Performance by Fluentd

We’re not going to compare two popular open-source log management tools that reduce the burden of managing logs, arguing about how much more efficient fluentd (written in CRuby) is than logstash (written in JRuby), or bringing Splunk and others into it. The point of this section is that there is always an alternative technology or method for achieving any goal.

As for the difference between fluentd and logstash, I have always been of the opinion that there is no single differentiator that makes one strictly better than the other, even given facts like logstash consuming more memory than fluentd, or fluentd leading in the number of available plugins, and so on.

Just take it or leave it. Take it if we are building a complex solution and fluentd makes us more comfortable with it; or keep using the other if the algorithmic style of its ‘route events’ statements makes you feel like a real programmer 🙂, which suits procedural programmers. Both fluentd and logstash are suitable for certain requirements.

But the most important point is that both can co-exist in the same environment and be used for specific use cases, for monolithic applications as well as microservices, much like combining shell scripts with PHP, or shell scripts with Java, to build more powerful tools. Indeed, an ELK-EFK hybrid could be the way to get the best out of both.
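To make the ‘route events’ remark concrete, here is a minimal fluentd configuration sketch; the forward port, the app.** tag, and the Elasticsearch host/port are hypothetical values for an EFK-style setup, not taken from a real deployment:

```
# Accept events forwarded from applications (hypothetical port)
<source>
  @type forward
  port 24224
</source>

# Route events tagged app.** to Elasticsearch (the "F" in EFK)
<match app.**>
  @type elasticsearch
  host localhost
  port 9200
</match>

# Everything else falls through to stdout for debugging
<match **>
  @type stdout
</match>
```

Routing is driven entirely by the tag patterns in the &lt;match&gt; blocks, evaluated top to bottom, which is what replaces logstash’s procedural conditionals.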

Command Control Tools

If we are familiar with software management tools and have followed Linux development from the beginning, then tools released by CSM (Computer Science and Mathematics) such as C3 (Cluster Command and Control) are one alternative for improving system scalability, in the sense of reducing the time and effort needed to operate and manage a cluster. One of its applications is when we work with supercomputers (see: https://tifosilinux.wordpress.com/2015/03/23/update-superkomputer-dengan-native-gnu-linux-finale/ ).

Besides C3, among today’s newer technologies, ansible can be another alternative. We must understand the playbook and inventory configuration when using ansible. Of course, we use all of this according to our needs.

[Image: cexec & cget illustration]
# Ansible installation

apt-add-repository --yes --update ppa:ansible/ansible
apt-get update
apt-get install ansible
ansible --version

>> cat /etc/ansible/hosts

## db-[99:101]-node.example.com
[SERVER2]
172.17.60.12

[SERVER3]
172.17.60.13

# Make sure key-based autologin is already set up from the ansible host to each node/cluster (this actually applies to C3 as well), OR use the -K parameter to be prompted for a password
>> ansible-playbook -u serversatu lemp.yml --become -K
[Image: playbook, inventory & node illustration]
[Image: check connectivity]
[Image: install remotely]
[Image: deployed remotely with playbook]

The captures above give a brief overview of an ansible deployment.
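The lemp.yml playbook itself is not shown above, so here is a minimal hypothetical sketch of what such a playbook could look like; the group names match the inventory above, while the nginx tasks are illustrative assumptions, not the actual playbook:

```yaml
# Hypothetical sketch of lemp.yml; adjust packages to your actual stack
- hosts: SERVER2:SERVER3
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Run against the inventory with ansible-playbook as shown above; the --become flag in the command line corresponds to become: yes here.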

Source Data Analysis Django

Below I include the link to the source code for a Web Data Analysis application built with the Django framework. Beforehand, for the concepts, explanations, and so on, I assume you already have the .pdf ebook from here:

https://play.google.com/store/books/details?id=tsSvDwAAQBAJ

Here is the link:

https://github.com/linuxusmile/bigh.git

Tifosilinux uses it on his Sequel Server

This is how to fix one of the most common issues: high load produced by SQL Server. The sequel version here is still 2008, but it should not be much different on a higher version. In this case, I will share what I did and how to reproduce it step by step.
First, we have to check which process is really the heaviest. We assume that the sequel is the top consumer, so log into it and execute this query.

SELECT s.session_id,
    r.status,
    r.blocking_session_id 'Blk by',
    r.wait_type,
    wait_resource,
    r.wait_time / (1000 * 60) 'Wait M',
    r.cpu_time,
    r.logical_reads,
    r.reads,
    r.writes,
    r.total_elapsed_time / (1000 * 60) 'Elaps M',
    Substring(st.TEXT, (r.statement_start_offset / 2) + 1,
        ((CASE r.statement_end_offset
              WHEN -1 THEN Datalength(st.TEXT)
              ELSE r.statement_end_offset
          END - r.statement_start_offset) / 2) + 1) AS statement_text,
    Coalesce(Quotename(Db_name(st.dbid)) + N'.' + Quotename(Object_schema_name(st.objectid, st.dbid)) + N'.' +
    Quotename(Object_name(st.objectid, st.dbid)), '') AS command_text,
    r.command,
    s.login_name,
    s.host_name,
    s.program_name,
    s.last_request_end_time,
    s.login_time,
    r.open_transaction_count
FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_exec_requests AS r
ON r.session_id = s.session_id
    CROSS APPLY sys.Dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id != @@SPID
ORDER BY r.cpu_time desc;

It will show you where the highest load comes from.

Then the field program_name (OPEN CURSOR …) shows which TSQL JobStep produced the heaviest process; its job id is what we use to execute the next query.

## OPEN CURSOR SQLAgent - TSQL JobStep (Job 0x84FAD6A2C164DF4885327CD1FF48322D : Step 2)

SELECT name FROM msdb.dbo.sysjobs WHERE job_id=CONVERT(uniqueidentifier,0x84FAD6A2C164DF4885327CD1FF48322D)
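To see which commands that job’s steps actually run (Step 2 in our case), the standard msdb.dbo.sysjobsteps table can be queried with the same job id; a sketch:

```sql
-- List the steps of the offending job, including the T-SQL each one runs
SELECT step_id, step_name, subsystem, command
FROM msdb.dbo.sysjobsteps
WHERE job_id = CONVERT(uniqueidentifier, 0x84FAD6A2C164DF4885327CD1FF48322D)
ORDER BY step_id;
```

This saves opening the job in the Agent UI just to read the step definitions.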

The DB, table, query, and anything else related to the heaviest process will be shown. As our action, we can change the state of the related database to read-only.

ALTER DATABASE <mydb> 
    SET READ_ONLY
    WITH NO_WAIT

If you want it to roll back all open connections, specify the command like this:

ALTER DATABASE <mydb> 
    SET READ_ONLY
    WITH ROLLBACK IMMEDIATE
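For completeness, once the offending job has been handled, the database can be returned to normal operation by reverting the read-only state; a sketch, with <mydb> again standing in for the actual database name:

```sql
ALTER DATABASE <mydb>
    SET READ_WRITE
    WITH ROLLBACK IMMEDIATE
```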

Last, we used the performance report from sequel studio (SQL Server Management Studio) as a reference.

[Image: performance report]

Going On to Understand the Definitive Guide of ..

Every enterprise is powered by data, and their applications create data. If they don’t.. that is nonsense.