Sunday, July 31, 2022

Engineering Manager



Interviews

https://www.managersclub.com/popular-interview-questions-for-engineering-managers/

https://igotanoffer.com/blogs/tech/engineering-manager-interviews#system-design

https://www.interviewkickstart.com/interview-questions/software-engineering-manager-interview-questions


Career Development


https://newsletter.pragmaticengineer.com/p/templates-as-inspiration-for-engineering Resources for Engineering Managers and Software Engineers

https://blog.pragmaticengineer.com/things-that-helped-me-successfully-move-into-engineering-management/ 

Company first, team second, team members next, yourself last

Time Management


http://www.engineeringladders.com/





https://www.quora.com/Does-a-software-manager-have-less-work-than-developers/answer/Jim-Plush

  • Resource allocation - identifying upcoming projects and what resources can shift onto them
  • Project prioritization - choosing the most important or most opportune work, depending on who is freeing up from projects, dealing with vacations, or handling other unexpected time off
  • Interviewing and recruiting - a large part of a manager's job is recruiting and interviewing to help strengthen the team. This also includes salary and overall package negotiations
  • Team coach - scheduling and actively attending 1:1s with team members to keep a pulse on the team, make sure people are happy with where their careers are, and provide constructive criticism when needed.
  • Team chef - ensuring during crunch times or in the case of a large, sustained outage the team is nourished
  • PagerDuty scheduling - making sure there's adequate coverage across the various parts of their services for production issues that may (no, will) happen overnight.
  • Technical oversight - making sure people are working towards shipping quality code vs research projects. Providing a sounding board for technical disagreements.
  • Buffering the team from the chaos of upper level management who want to do a new feature every other day
  • Managing up - providing roadmaps, status reports, production health dashboards to let executive teams understand where the team is
  • Reviews and Promotions - a lot of time is spent in reviews and making sure people are getting promoted when they should, salary negotiation, perk negotiation, etc…
  • Communication - as a manager you are the communication hub to relay what's going on above you to your team and what's happening with your team to your boss and above. Teams crave direction and understanding of where the company or org is headed, long term plans, short term plans, etc…
  • Setting up or at least overseeing that you have the right processes in place for bug triage, code review workflow, ticketing/project management flows, roadmap/resource allocation methodologies.
  • Understanding when to let someone go - Managers need to be able to design performance improvement plans and in times of failure, remove people from the team that may be underperforming or creating issues amongst the team. It’s like a band-aid, it should be done promptly so both parties can move on with their lives.
  • Meeting fighter - making sure your team members aren’t spending all their days in meetings and actively working to either schedule meetings around clusters of times or trying to remove them entirely in favor of other mediums of communication.
  • Customer liaison - working directly with customers for feedback or issues related to outages or expectations. Sometimes being brought into prospective sales to highlight technical details of your specific part of the organization.
  • CSO - Chief Swag Officer, ensuring shirts, jackets, stickers, and hoodies are delivered to everyone on the team, helping to budget for those items if the team expresses interest
  • CLO - Chief Learning Officer, making sure your team has access to learning and educational budget and materials. Working with execs to establish budgets for conferences or other career advancement opportunities.
  • CBO - Chief Budgeting Officer, making sure you have budgets for the resources you need for the upcoming quarter or year including headcount, software licenses, education material, swag, conferences, recruiting advertisement opportunities, food, snacks, gift card bonuses for small rewards, travel budget for remote teams.
  • CCO - Chief Culture Officer, making sure that cultural norms are being upheld and helping to foster a positive culture, e.g. watching for anything happening in the org that isn't up to its standards or ideals: backdoor promotions, salary imbalances, naysayers to curb, signs of burnout. A big part of culture is who gets hired, fired and promoted.

Questions


Do we need an EM if we have a Product Manager / Tech Lead? The question is like asking why we need a CEO. The CEO is the person who truly understands the business, who can find the right people to work on the right things and balance resources. It is not purely product and technology; people, culture, and personal well-being also matter. The CEO thinks about how to make the business excellent and sustainable, not only for now but for the long term.

Conflicts with teams / people: team conflict, customer obsession, how to achieve goals

Why do we need 1:1s? Private talks, overall happiness, concerns. The ultimate goal is to empower people to feel safe -> be themselves -> diversified opinions -> good for the company





Thursday, November 11, 2021

Kubernetes

How to expose a service

  1. Ingress
    1. Ingress Controller (Edge Proxy, Pods): for the Ingress resource to work, the cluster must have an ingress controller running. Ingress controllers are not started automatically with a cluster.
      1. AWS: Ingress = Application Load Balancer (L7 HTTP), Service of Type=LoadBalancer = Network Load Balancer (L4). The AWS ingress controller itself is a k8s Deployment of Pods
      2. Nginx: it also runs as Pods. In AWS we use a Network Load Balancer (NLB) to expose the NGINX Ingress Controller behind a Service of Type=LoadBalancer.
  2. Use Service.Type=LoadBalancer
    1. The big downside is that each service you expose with a LoadBalancer gets its own IP address, and you have to pay for a LoadBalancer per exposed service, which can get expensive! (A client sketch for options 2 and 3 follows this list.)
  3. Use Service.Type=NodePort
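
As a concrete illustration of options 2 and 3, a minimal sketch with the official Kubernetes Python client; the service name, labels, and ports are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Hypothetical Service exposing pods labeled app=demo on port 80.
# With type=LoadBalancer the cloud provider provisions one load balancer
# per Service, which is the cost downside noted above.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",  # "NodePort" for option 3
        selector={"app": "demo"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=svc)
```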



Networking


CNI primarily works at L4, whereas a service mesh works at L7.

There are lots of different kinds of CNI plugins, but the two main ones are:

  • Network plugins, which are responsible for connecting pods to the network
  • IPAM (IP Address Management) plugins, which are responsible for allocating pod IP addresses

Service Mesh


Service mesh implementation: before the sidecar proxy container and the application container start, an init container runs first. The init container sets up iptables (the default traffic-interception method in Istio; BPF, IPVS, etc. can also be used) to intercept traffic entering the pod and redirect it to the Envoy sidecar proxy. All TCP traffic (Envoy currently only supports TCP) is intercepted by the sidecar, while traffic of other protocols proceeds as it originally would.
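
To make the interception concrete, a heavily simplified sketch (Python for consistency with the rest of these notes; the chain name is made up, and 15001 is Istio's default outbound Envoy port). Real istio-init also handles inbound traffic, excludes the proxy's own UID/GID to avoid interception loops, and supports port include/exclude lists:

```python
import subprocess

SIDECAR_PORT = "15001"  # Envoy's outbound listener port in Istio's defaults

def iptables(*args):
    # must run as root inside the pod's network namespace (init container)
    subprocess.run(["iptables", "-t", "nat", *args], check=True)

# A chain that redirects TCP traffic to the sidecar...
iptables("-N", "PROXY_REDIRECT")
iptables("-A", "PROXY_REDIRECT", "-p", "tcp",
         "-j", "REDIRECT", "--to-ports", SIDECAR_PORT)
# ...and route all TCP leaving the pod through that chain.
iptables("-A", "OUTPUT", "-p", "tcp", "-j", "PROXY_REDIRECT")
```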




Comparison: sidecar proxy vs per node vs per service account per node vs shared remote proxy with micro proxy: https://www.solo.io/blog/ebpf-for-service-mesh/
  1. consider resource overhead / feature isolation / security granularity / upgrade impact
  2. For Linkerd: Per-host proxies are significantly worse than sidecars https://buoyant.io/2022/06/07/ebpf-sidecars-and-the-future-of-the-service-mesh 

kube-proxy


kube-proxy is responsible for updating the iptables rules on each node of the cluster. https://betterprogramming.pub/k8s-a-closer-look-at-kube-proxy-372c4e8b090



eBPF & io_uring


eBPF is a virtual machine embedded within the Linux kernel. It allows small programs to be loaded into the kernel and attached to hooks, which are triggered when some event occurs. This allows the behaviour of the kernel to be (sometimes heavily) customised. While the eBPF virtual machine is the same for each type of hook, the capabilities of the hooks vary considerably. Since loading programs into the kernel could be dangerous, the kernel runs all programs through a very strict static verifier; the verifier sandboxes the program, ensuring it can only access allowed parts of memory and ensuring that it must terminate quickly. https://projectcalico.docs.tigera.io/about/about-ebpf
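
The canonical minimal example of the hook mechanism, using the BCC Python bindings (assumes bcc is installed and the script runs as root): the restricted-C program below is compiled, checked by the verifier, loaded, and attached to a kprobe that fires on the clone syscall.

```python
from bcc import BPF

# eBPF program in restricted C; the in-kernel verifier rejects it unless
# memory accesses are provably safe and it provably terminates.
prog = r"""
int hello(void *ctx) {
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()  # prints a line every time a process is created
```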

io_uring supports linking operations, but there is no way to generically pass the result of one system call to the next. With a simple BPF program, the application can tell the kernel how the result of open is to be passed to read (including the error handling); the program then allocates its own buffers and keeps reading until the entire file is consumed and finally closed: we can checksum, compress, or search an entire file with a single system call.

Routing

  1. Cloudflare --proxied--> AWS Route 53 --> ELB (Ingress-managed HA Cloud LoadBalancer) --> EC2 instances (Target Group nodes) --> Ingress Controller Pods (can be deployment or DaemonSet) -> Actual backend pods
    1. Don't use a DaemonSet when the cluster is too big - extra burden, as each DaemonSet pod needs to connect to the k8s API



Wednesday, November 3, 2021

Data Engineering Data Lake



Some good readings regarding Data Lake.

Airflow

Airflow serves as the orchestration tool; the whole data flow (a minimal DAG sketch follows this list):

  1. Trigger Airbyte to move data from third parties to S3
  2. Trigger a Spark (EMR) ETL job from S3 to the data lake bronze layer
  3. Trigger a DBT job from the bronze layer to the silver layer (Redshift)
  4. Trigger a Jupyter notebook, via the Papermill operator, that holds the data-analysis logic
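
A minimal DAG sketch of those four steps (connection IDs, cluster ID, bucket paths, and notebook names are all hypothetical; provider import paths vary slightly by version):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.airbyte.operators.airbyte import AirbyteTriggerSyncOperator
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.papermill.operators.papermill import PapermillOperator

with DAG("lake_pipeline", start_date=datetime(2021, 11, 1),
         schedule_interval="@daily", catchup=False) as dag:
    ingest = AirbyteTriggerSyncOperator(          # 1. third party -> S3
        task_id="airbyte_sync",
        airbyte_conn_id="airbyte_default",
        connection_id="<airbyte-connection-uuid>",
    )
    to_bronze = EmrAddStepsOperator(              # 2. S3 -> bronze via Spark
        task_id="spark_etl",
        job_flow_id="<emr-cluster-id>",
        steps=[{
            "Name": "etl_to_bronze",
            "ActionOnFailure": "CANCEL_AND_WAIT",
            "HadoopJarStep": {"Jar": "command-runner.jar",
                              "Args": ["spark-submit", "s3://bucket/etl.py"]},
        }],
    )
    to_silver = BashOperator(                     # 3. bronze -> silver (dbt)
        task_id="dbt_run", bash_command="dbt run")
    analyze = PapermillOperator(                  # 4. analysis notebook
        task_id="notebook",
        input_nb="analysis.ipynb",
        output_nb="analysis-{{ ds }}.ipynb",
    )
    ingest >> to_bronze >> to_silver >> analyze
```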


GDPR

For GDPR, we should support a mechanism to delete records in the raw S3 / bronze / silver / gold layers. A primary key is important; alternatively, use periodic compaction to overwrite changes.


How to evaluate data lake tools

  1. LakeFS
    1. https://lakefs.io/hudi-iceberg-and-delta-lake-data-lake-table-formats-compared/
    2. https://lakefs.io/hive-metastore-why-its-still-here-and-what-can-replace-it/
    3. Data Lakes: The Definitive Guide | LakeFS
      A data lake is a system of technologies that allow for the querying of data in file or blob objects.
  2. https://www.upsolver.com/blog/getting-data-lake-etl-right-6-guidelines-evaluating-tools
    - ETL/ELT transformation engine
    - GDPR record deletion
    - Object time travel / data mutation
    - ACID transactions
    - Streaming and batching
  3. https://www.slideshare.net/databricks/a-thorough-comparison-of-delta-lake-iceberg-and-hudi
  4. https://blog.csdn.net/younger_china/article/details/125926533 Data Lake 09: an in-depth comparison of the open-source frameworks Delta Lake, Hudi, and Iceberg
    1. https://www.infoq.cn/article/fjebconxd2sz9wloykfo
    2. https://eric-sun.medium.com/rescue-to-distributed-file-system-2dd8abd5d80d Delta Lake > Hudi > Iceberg

Data Versioning

lakeFS deletion regarding GDPR
https://medium.com/datamindedbe/what-is-lakefs-a-critical-survey-edce708a9b8e
https://lakefs.io/new-in-lakefs-data-retention-policies/

GDPR deletion request: Crypto shredding: How it can solve modern data retention challenges:

  1. 100B keys (one key per user)
  2. MemoryDB to hold all keys in memory
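
A minimal sketch of the idea using the `cryptography` package (the dict stands in for MemoryDB): every record is encrypted with its user's key, so deleting the key renders all copies of that user's data, in whatever layer they live, unreadable.

```python
from cryptography.fernet import Fernet

key_store = {}  # stand-in for MemoryDB: user_id -> per-user key

def write_record(user_id: str, payload: bytes) -> bytes:
    key = key_store.setdefault(user_id, Fernet.generate_key())
    return Fernet(key).encrypt(payload)  # ciphertext can sit in any layer

def read_record(user_id: str, ciphertext: bytes) -> bytes:
    return Fernet(key_store[user_id]).decrypt(ciphertext)

def gdpr_delete(user_id: str) -> None:
    # crypto shredding: destroy the key, not the (immutable) data files
    key_store.pop(user_id, None)

token = write_record("user-42", b"email=alice@example.com")
gdpr_delete("user-42")
# read_record("user-42", token) now fails: the data is unrecoverable
```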

AWS Datalake Solution





Data Mesh 


Architectural failure modes:
  • Centralized and monolithic - building an experimental data pipeline is slow; let users build it themselves
  • Coupled pipeline decomposition - building a new dataset depends on other teams
  • Siloed and hyper-specialized ownership - data engineers don't care about the data


The next enterprise data platform architecture is in the convergence of Distributed Domain Driven Architecture, Self-serve Platform Design, and Product Thinking with Data.


The key to building the data infrastructure as a platform is (a) to not include any domain specific concepts or business logic, keeping it domain agnostic, and (b) make sure the platform hides all the underlying complexity and provides the data infrastructure components in a self-service manner. There is a long list of capabilities that a self-serve data infrastructure as a platform provides to its users, a domain's data engineers. Here are a few of them:

  • Scalable polyglot big data storage
  • Encryption for data at rest and in motion
  • Data product versioning
  • Data product schema
  • Data product de-identification
  • Unified data access control and logging
  • Data pipeline implementation and orchestration
  • Data product discovery, catalog registration and publishing
  • Data governance and standardization
  • Data product lineage
  • Data product monitoring/alerting/log
  • Data product quality metrics (collection and sharing)
  • In memory data caching
  • Federated identity management
  • Compute and data locality

A success criterion for self-serve data infrastructure is lowering the 'lead time to create a new data product' on the infrastructure.

This paradigm shift requires a new set of governing principles accompanied with a new language:

  • Serving over ingesting
  • Discovering and using over extracting and loading
  • Publishing events as streams over flowing data around via centralized pipelines
  • Ecosystem of data products over centralized data platform




Four underpinning principles that any data mesh implementation embodies to achieve the promise of scale, while delivering the quality and integrity guarantees needed to make data usable: 1) domain-oriented decentralized data ownership and architecture, 2) data as a product, 3) self-serve data infrastructure as a platform, and 4) federated computational governance.


Domain ownership

For example, the teams who manage ‘podcasts’, while providing APIs for releasing podcasts, should also be responsible for providing historical data that represents ‘released podcasts’ over time with other facts such as ‘listenership’ over time.


Data as a product

Each domain will include data product developer roles, responsible for building, maintaining and serving the domain's data products. Data product developers will be working alongside other developers in the domain. Each domain team may serve one or multiple data products. It’s also possible to form new teams to serve data products that don’t naturally fit into an existing operational domain.


Self-serve data platform

My personal hope is that we start seeing a convergence of operational and data infrastructure where it makes sense. For example, perhaps running Spark on the same orchestration system, e.g. Kubernetes.


Federated computational governance

Striking a balance between what shall be standardized globally (implemented and enforced by the platform for all domains and their data products) and what shall be left to the domains to decide is an art.

They need to comply with the modeling of quality and the specification of SLOs based on a global standard, defined by the global federated governance team and automated by the platform.






DDD Hexagonal






Application core = business logic

Domain Layer. The objects in this layer contain the data and the logic to manipulate that data that is specific to the domain itself, independent of the business processes that trigger that logic; they are completely unaware of the Application Layer.

Examples of components can be Authentication, Authorization, Billing, User, Review or Account, but they are always related to the domain. Bounded contexts like Authorization and/or Authentication should be seen as external tools for which we create an adapter and hide behind some kind of port.
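
A minimal port-and-adapter sketch in Python (all names hypothetical): the core defines the port in its own terms, the adapter wraps the external tool, and the domain service depends only on the port.

```python
from abc import ABC, abstractmethod

class AuthorizationPort(ABC):
    """Port: owned by the application core, phrased in domain language."""
    @abstractmethod
    def can_review(self, user_id: str, product_id: str) -> bool: ...

class OAuthAuthorizationAdapter(AuthorizationPort):
    """Adapter: translates the port to a concrete external tool."""
    def __init__(self, oauth_client):
        self._client = oauth_client

    def can_review(self, user_id: str, product_id: str) -> bool:
        return self._client.has_scope(user_id, f"review:{product_id}")

class ReviewService:
    """Application core: unaware of which adapter is plugged in."""
    def __init__(self, authz: AuthorizationPort):
        self._authz = authz

    def submit_review(self, user_id: str, product_id: str, text: str):
        if not self._authz.can_review(user_id, product_id):
            raise PermissionError("user may not review this product")
        ...  # persist via another port, e.g. a repository
```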

The goal, as always, is a codebase that is loosely coupled and highly cohesive.

Trend



https://www.qlik.com/us/-/media/files/resource-library/global-us/register/ebooks/eb-bi-data-trends-2022-en.pdf A competitor can become a partner, a partner can become a customer, and a customer can become a competitor. The solution is not to wall off but to lean in to a new form of competitive edge: generative relationships with mutually beneficial outcomes. Your only option is to become more “interwoven,” creating a trusted ecosystem built on clear rules of engagement.





Data Sharing Is a Business Necessity to Accelerate Digital Business: Gartner predicts that by 2023, organizations that promote data sharing will outperform their peers on most business value metrics. The traditional “don’t share data unless” mindset should be replaced with “must share data unless.”

Wednesday, April 21, 2021

Database and Application

https://tasteturnpike.blogspot.com/2017/03/sre-knowledge.html


https://www.alibabacloud.com/blog/what-are-the-differences-and-functions-of-the-redo-log-undo-log-and-binlog-in-mysql_598035

Redo: ensures the durability of transactions. Dirty pages may not yet have been written to disk at the time of a failure; when the MySQL service restarts, it replays the redo log to restore them and achieve transaction durability.

Undo: it stores a version of the data from before the transaction, which can be used for rollback. At the same time, it provides reads under Multi-Version Concurrency Control (MVCC), i.e., reads without locking.

Binlog: 
  1. It is used for replication. In master-slave replication, the slave database replays the binlog stored in the master database to achieve master-slave synchronization.
  2. It is used for the database point-in-time restore.



Postgres


  1. Postgres Transaction Isolation
    1. Phenomena:
      - dirty read: a transaction reads data written by a concurrent uncommitted transaction
      - nonrepeatable read: a transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read)
      - phantom read: a transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction
      - serialization anomaly: the result of successfully committing a group of transactions is inconsistent with all possible orderings of running those transactions one at a time

    2. read uncommitted
    3. read committed
      1. Because Read Committed mode starts each command with a new snapshot that includes all transactions committed up to that instant, subsequent commands in the same transaction will see the effects of the committed concurrent transaction in any case. The point at issue above is whether or not a single command sees an absolutely consistent view of the database.
    4. repeatable read: sees data committed before the transaction began; it never sees either uncommitted data or changes committed during transaction execution by concurrent transactions
      1. Creates a snapshot for the transaction to ensure consistent reads. But if another transaction modifies the row in the meantime, the transaction must retry.
      2. Applications using this level must be prepared to retry transactions due to serialization failures. (A retry sketch in Python follows this list.)
    5. serializable: This level emulates serial transaction execution for all committed transactions; as if transactions had been executed one after another, serially, rather than concurrently. However, like the Repeatable Read level, applications using this level must be prepared to retry transactions due to serialization failures. In fact, this isolation level works exactly the same as Repeatable Read except that it monitors for conditions which could make execution of a concurrent set of serializable transactions behave in a manner inconsistent with all possible serial (one at a time) executions of those transactions. This monitoring does not introduce any blocking beyond that present in repeatable read, but there is some overhead to the monitoring, and detection of the conditions which could cause a serialization anomaly will trigger a serialization failure.
      1. serialization failure: concurrent transactions affect each other, and different execution orders would produce different results
      2. predicate locking (these show up in the pg_locks system view with a mode of SIReadLock): detects whether a write would affect concurrent transactions. In PostgreSQL these locks do not cause any blocking and therefore can not play any part in causing a deadlock. They are used to identify and flag dependencies among concurrent Serializable transactions which in certain combinations can lead to serialization anomalies. In contrast, a Read Committed or Repeatable Read transaction which wants to ensure data consistency may need to take out a lock on an entire table, which could block other users attempting to use that table, or it may use SELECT FOR UPDATE or SELECT FOR SHARE, which not only can block other transactions but cause disk access.
        1. Serializable predicate locking performs better than explicit locks
      3. PostgreSQL's Serializable transaction isolation level only allows concurrent transactions to commit if it can prove there is a serial order of execution that would produce the same effect. If unique-constraint violations can be checked in advance, check them before starting the transaction.
      4. Optimizations
        1. Control the number of active connections, using a connection pool if needed. This is always an important performance consideration, but it can be particularly important in a busy system using Serializable transactions.
        2. Eliminate explicit locks, SELECT FOR UPDATE, and SELECT FOR SHARE where no longer needed due to the protections automatically provided by Serializable transactions.
  2. https://zhuanlan.zhihu.com/p/54979396 A survey of Snapshot Isolation
  3. Linearizability, serializability, transaction isolation and consistency models
  4. The most common isolation level implemented with MVCC is snapshot isolation
    1. MVCC introduces the challenge of how to remove versions that become obsolete and will never be read. In some cases, a process to periodically sweep through and delete the obsolete versions is implemented. This is often a stop-the-world process that traverses a whole table and rewrites it with the last version of each data item. PostgreSQL can use this approach with its VACUUM FREEZE process
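
The retry requirement from the Repeatable Read / Serializable notes above, sketched with psycopg2 (the DSN and the accounts table are hypothetical): on SQLSTATE 40001 the transaction is rolled back and simply re-run.

```python
import psycopg2
from psycopg2 import errors
from psycopg2.extensions import ISOLATION_LEVEL_SERIALIZABLE

conn = psycopg2.connect("dbname=demo")  # hypothetical DSN
conn.set_session(isolation_level=ISOLATION_LEVEL_SERIALIZABLE)

def transfer(src, dst, amount, retries=5):
    for _ in range(retries):
        try:
            with conn:  # commits on success, rolls back on exception
                with conn.cursor() as cur:
                    cur.execute("UPDATE accounts SET balance = balance - %s"
                                " WHERE id = %s", (amount, src))
                    cur.execute("UPDATE accounts SET balance = balance + %s"
                                " WHERE id = %s", (amount, dst))
            return
        except errors.SerializationFailure:
            continue  # SQLSTATE 40001: safe to retry the whole transaction
    raise RuntimeError("gave up after repeated serialization failures")
```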

Postgres DB vacuum and query conflict

https://www.postgresql.org/docs/9.2/hot-standby.html#HOT-STANDBY-CONFLICT

https://www.cybertec-postgresql.com/en/what-hot_standby_feedback-in-postgresql-really-does/ With "hot_standby_feedback", we can teach the standby to periodically inform the primary about the oldest transaction running on the standby. If the primary knows about old transactions on the standby, it can make VACUUM keep rows until the standbys are done.



LevelDB RocksDB



  1. The main challenge is that Flash cells can only be erased block-wise and written page-wise. To write new data to a page, it must be physically empty; if it is not, its contents have to be erased first. However, it is not possible to erase a single page, only all pages that are part of one block. Because the block sizes of an SSD are fixed (for example, 512 KB or 1024 KB, up to 4 MB), a block that only contains a page with 4 KB of data will still take the full 512 KB of storage.
    1. An SSD needs to spread writes across its chips (wear leveling) to prevent performance degradation from uneven wear
  2. A delete tombstone keeps being compacted down until it reaches the bottom level, and only then is the key actually deleted
  3. Hot keys stay in L0; others are compacted to lower levels. To speed up lookups, a Bloom filter is used to determine whether a key exists in the database at all
  4. L0: overlapping keys, sorted by flush time. Files are sorted based on the time they were flushed. Their key ranges (as defined by FileMetaData.smallest and FileMetaData.largest) mostly overlap with each other, so a lookup needs to check every L0 file.
  5. L1+: non-overlapping keys, sorted by key (see the lookup sketch below)
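
An illustrative lookup over that layout (a toy model, not RocksDB's actual code; real lookups consult the memtable and Bloom filters first): every L0 file is checked newest-first because ranges overlap, while each lower level is binary searched.

```python
import bisect

# A "file" is modeled as (smallest_key, largest_key, {key: value}).
def get(key, l0_files, lower_levels):
    # L0: key ranges overlap, so every file must be checked (newest first).
    for smallest, largest, data in l0_files:
        if smallest <= key <= largest and key in data:
            return data[key]
    # L1+: files within a level are sorted by key and non-overlapping,
    # so binary search finds the single candidate file per level.
    for level in lower_levels:
        i = bisect.bisect_right([f[0] for f in level], key) - 1
        if i >= 0:
            smallest, largest, data = level[i]
            if key <= largest and key in data:
                return data[key]
    return None  # absent (or shadowed by a tombstone, in a fuller model)
```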


Lock

How to implement a mutex?
  • One way is using Test-and-set (spinlock)
  • Futexes have the desirable property that they do not require a kernel system call in the common cases of locking or unlocking an uncontended mutex. In these cases, the user-mode code successfully uses an atomic compare and swap (CAS)
Test-and-set

  • Test-and-set: the location value can only be set after passing the test. Supported at machine level (CPU instruction): an instruction that writes 1 (set) to a memory location and returns its old value as a single atomic (i.e., non-interruptible) operation
    • while (test_and_set(lock) == 1); # The calling process obtains the lock if the old value was 0; otherwise the while-loop spins waiting to acquire the lock. This is called a spinlock.
  • Test and test-and-set chooses not to spin on test_and_set(); instead it spins on checking whether the shared lock variable seems free
  • Performance: When processor P1 has obtained a lock and processor P2 is also waiting for the lock, P2 will keep incurring bus transactions in attempts to acquire the lock. When a processor has obtained a lock, all other processors which also wish to obtain the same lock keep trying to obtain the lock by initiating bus transactions repeatedly until they get hold of the lock. This increases the bus traffic requirement of test-and-set significantly. This slows down all other traffic from cache and coherence misses. It slows down the overall section, since the traffic is saturated by failed lock acquisition attempts. Test-and-test-and-set is an improvement over TSL since it does not initiate lock acquisition requests continuously.
Spinlock
  • Pro: it avoids overhead from operating system process rescheduling or context switching, so spinlocks are efficient if threads are likely to be blocked for only short periods. Hence some multithreaded synchronization mechanisms synchronize with user-mode spinlocks or their derivatives (such as lightweight read-write locks) instead of kernel-mode synchronization objects; the cost difference is about three orders of magnitude
  • Con: spinlocks are unsuitable on a single-core, single-threaded CPU (spinning can hang the machine), and CPU time is wasted while spinning
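
Python has no user-level atomic CAS, so the following is only a simulation of the pattern above: threading.Lock.acquire(blocking=False) plays the role of the atomic test-and-set instruction, and the while loop is the spin.

```python
import threading

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()  # stands in for the lock word in memory

    def acquire(self):
        # "test_and_set" loop: succeeds only if the old state was unlocked
        while not self._flag.acquire(blocking=False):
            pass  # busy-wait; real spinlocks insert a CPU pause hint here

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def work():
    global counter
    for _ in range(100_000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: the lock made the increments mutually exclusive
```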



Nginx



On this four‑core server, the NGINX master process creates four worker processes and a couple of cache helper processes which manage the on‑disk content cache.

Nginx processes share the same socket: if accept_mutex is enabled, worker processes accept new connections in turn


With the SO_REUSEPORT option enabled, there are multiple socket listeners for each IP address and port combination, one for each worker process.
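
What SO_REUSEPORT means at the socket level, in a Linux-only Python sketch: each worker process opens its own listening socket on the same ip:port, and the kernel distributes incoming connections among them (like NGINX's reuseport directive).

```python
import os
import socket

def serve(port=8080):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)  # Linux 3.9+
    s.bind(("0.0.0.0", port))   # no EADDRINUSE: one listener per worker
    s.listen(128)
    while True:
        conn, _ = s.accept()    # the kernel picked this worker's socket
        conn.sendall(b"handled by pid %d\n" % os.getpid())
        conn.close()

for _ in range(3):              # three forked workers plus the parent
    if os.fork() == 0:
        serve()                 # child never returns
serve()
```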




Rsync: incremental backup

  1. Compare directory differences: check subtree files
  2. Compare whether need to update file: check file metadata: mtime, size
  3. Transfer only the needed data: f_new is scanned with a sliding window, computing a rolling hash (see the sketch below)
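
A sketch of the weak rolling checksum from step 3 (rsync's real one is an Adler-32 variant; the modulus here is illustrative): sliding the window one byte updates the hash in O(1) instead of re-hashing the whole block.

```python
MOD = 1 << 16

def weak_hash(block: bytes):
    a = sum(block) % MOD
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % MOD
    return a, b

def roll(a, b, out_byte, in_byte, block_len):
    # drop out_byte on the left, append in_byte on the right: O(1) update
    a = (a - out_byte + in_byte) % MOD
    b = (b - block_len * out_byte + a) % MOD
    return a, b

data = b"hello rolling world"
n = 4
a, b = weak_hash(data[:n])
for i in range(1, len(data) - n + 1):
    a, b = roll(a, b, data[i - 1], data[i + n - 1], n)
    assert (a, b) == weak_hash(data[i:i + n])  # rolled == recomputed
```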




SAP

Availability

Standby server will check each HADR member to determine if it is eligible for promotion: https://help.sap.com/docs/SAP_ASE/efe56ad3cad0467d837c8ff1ac6ba75c/a6c69a21bc2b1014adda8a01ba6488fc.html -- however, a network partition among HADR members will prevent all standby servers from being promoted.

Scalability

Table partitioning is the solution



Sunday, October 28, 2018

Deep Dive, Long term

Career Planning

Philosophy
  • Titles lead to labels; labels lead to stereotypes; stereotypes lead to bias
  • No walls between roles
  • Change perceptions; think differently

Double proxy

A -> A sidecar -> proxy -> B
  • key points: easy to roll back misconfigurations
  • connection pool size decreased by 99%
  • TLS handshake latency reduced at p99 due to warm connections
  • connection reuse rate increased
  • security enhancement as a side effect
  • refactoring hard-coded configs for easy test/rollback without rebuilding the package
  • gated slow rollout
  • troubleshooting related issues

Private and Public cloud migration

  • key points: easy to roll back misconfigurations
  • security: address security risks as it will face the Internet directly, configure the right OneLogin and firewalls/security group
    • CSRF middleware
    • Security test: Wireshark and ngrep analyze request/response
    • Session in NodeJS
  • networking/service discovery:
    • IP tunnel with allocated subnet CIDR
    • private -> public is easy to configure
    • public -> private needs a hack, as there is no routing service set up in the public cloud
  • elastic scaling along with the workload
  • developer: completely new development workflow and build pipeline (dockerization)
  • migration plan
    • storage migration plan A: dump mysql; shutdown service; import
      • plan B: mysql master-slave replication; script switching master-slave
    • DNS migration: the private DNS record redirects to the public DNS
      • New domain's DNS certificate needs double authentication
  • infrastructure as code: to spawn a deployment easily
  • the budget is clearer
  • communication is slow, as there are lots of unknowns

Retrospective

  • Refactoring vs Fixing existing broken tools
  • Asking for permissions to finish job
  • Teams A and B are highly coupled but not cooperative: show them the impact of finishing one project; fly there to meet in person

Saturday, October 13, 2018

Troubleshooting


Monday, February 5, 2018

Coding, Algorithm, Data Structure


Compared with the CSP model, the Actor model can run across nodes in a distributed cluster; however, messages can be lost once a mailbox is full.


Python


  1. Python Project Structure
  2. Google Python Style Guide
  3. PEP 8 -- Style Guide for Python Code
  4. https://buckbuild.com/function/glob.html

Go


Coding


Why is recursion heavy? There may be too many stack frames, each containing the call's return address, local data, and parameters.
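
A quick CPython illustration, assuming the default limits: every frame carries that bookkeeping, and the interpreter caps the depth outright.

```python
import sys

def depth(n):
    # each call pushes a frame (return address, parameters, locals) that
    # stays alive until the deepest call returns
    return 0 if n == 0 else 1 + depth(n - 1)

print(sys.getrecursionlimit())  # 1000 by default in CPython
print(depth(900))               # fine
try:
    depth(100_000)
except RecursionError as e:     # the pile of frames is exactly the cost
    print(e)                    # maximum recursion depth exceeded
```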

算法