Object-Oriented Programming
What is Object-Oriented Programming?
Object-oriented programming (OOP) is a programming paradigm built around the concept of objects. An object is a basic unit of a program, containing both data and the functions that operate on that data. In OOP, programs are created by defining classes, which define the properties and methods of objects. A class is a template or blueprint that defines a set of properties and methods shared by all instances of that class.
There are many advantages to object-oriented programming.
- Firstly, code is reusable. Because data and methods are encapsulated within classes, a class can be used in multiple programs, improving code reuse.
- Secondly, code is easier to maintain, because changes inside a class do not affect the code that uses it.
- Thirdly, code is more readable, as encapsulation into classes and methods keeps programs modular and structured.
- Finally, code reliability is increased by mechanisms such as encapsulation and inheritance.
However, there are also some disadvantages to object-oriented programming.
- For example, performance overhead may arise from mechanisms such as encapsulation, inheritance, and dynamic dispatch.
- Complexity issues may also arise due to the involvement of multiple concepts and technologies in OOP.
Overall, object-oriented programming is beneficial for development, leading many projects to use languages like Java that support OOP.
Example of Object-Oriented Programming
Consider a system for managing customer information. Each customer who comes to a bank can be considered an object with attributes such as ID, name, age, and gender. They will also have methods for booking an appointment, reserving a seat, and depositing money. To create a customer object, we need to provide a constructor method. To easily modify customer information, we can add getter and setter methods. When a new customer applies for service, we call the constructor method to create a customer object. When the customer wants to change their information, we call the getter and setter methods.
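The bank-customer example above can be sketched as a plain Java class. The names (`Customer`, `deposit`) and the fields chosen are illustrative, not taken from any real banking system:

```java
// Illustrative sketch of the bank-customer object described above.
public class Customer {
    private final String id;  // immutable identity
    private String name;
    private int age;
    private double balance;   // state modified by deposit()

    // Constructor: called when a new customer applies for service.
    public Customer(String id, String name, int age) {
        this.id = id;
        this.name = name;
        this.age = age;
        this.balance = 0.0;
    }

    // Getters and setters let us read and modify customer information.
    public String getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public double getBalance() { return balance; }

    // A business method: depositing money updates the object's state.
    public void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        this.balance += amount;
    }
}
```

Creating a customer calls the constructor; changing their information later goes through the setters, exactly as the paragraph above describes.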
Spring MVC
What is Spring MVC?
Spring MVC is a lightweight web development framework provided by the Spring framework, based on the Model-View-Controller (MVC) pattern. It provides a complete solution for presentation-layer (UI) development within the Spring framework. Following the MVC architectural pattern, it decomposes a complex web application into three layers: Model, View, and Controller. This effectively simplifies web application development and reduces the risk of errors.
The responsibilities of each layer in Spring MVC are fixed.
- Model: Responsible for the business logic and data processing behind a request, returning the results to the Controller.
- View: Responsible for rendering the results of request processing and presenting them to the client browser.
- Controller: Responsible for interacting between the Model and View. It mainly receives user requests, calls the Model to process the requests, and then passes the results from the Model to the View.
Spring MVC is essentially a further encapsulation of Servlets. Its core component is DispatcherServlet, which acts as the front-end controller for Spring MVC. It primarily handles and distributes requests and responses uniformly. The requests received by the Controller are actually distributed by DispatcherServlet according to certain rules. Spring MVC has highly configurable modular components due to its loose coupling and pluggable structure compared to other MVC frameworks, providing greater scalability and flexibility. Additionally, Spring MVC supports both annotation-driven and REST styles.
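A minimal sketch of an annotation-driven Spring MVC controller; the class name `UserController` and the `/users` path are illustrative assumptions:

```java
// Minimal Spring MVC controller sketch; DispatcherServlet dispatches
// matching requests here via HandlerMapping.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/users")   // parent path for every handler method below
public class UserController {

    // Handles GET /users/{id}; @RestController serializes the return
    // value into the response body.
    @GetMapping("/{id}")
    public String getUser(@PathVariable long id) {
        // In a real application the Controller would call the Model
        // (a service layer) and return its result for rendering.
        return "user-" + id;
    }
}
```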
Process of Spring MVC
- When a user sends a request to the server, the request is captured by the Spring front-end controller DispatcherServlet.
- DispatcherServlet parses the request URL to obtain the request URI.
- It then calls HandlerMapping, which looks up the Handler configured for that URI and returns all relevant objects, including the Handler object and its corresponding interceptors, wrapped in a HandlerExecutionChain (a handler chain).
- DispatcherServlet then invokes the Handler (the Controller) through a suitable HandlerAdapter to process the request.
- After processing the request, the Controller returns a ModelAndView object to DispatcherServlet.
- DispatcherServlet passes the ModelAndView to a ViewResolver, which resolves it into a concrete View; the View is rendered with the model data and the response is returned to the user.
Java AOP
AOP is a programming concept that supplements object-oriented programming. AOP can intercept specific methods and dynamically add extra functionality to a program without modifying the source code, thereby separating cross-cutting concerns. In Java, the implementation of AOP mainly relies on the AOP support provided by the Spring framework. Spring AOP provides an AOP implementation based on proxy, which encapsulates common behaviors and logic that affect multiple objects but are irrelevant to business logic, reducing code repetition, coupling between modules, and improving maintainability. We use AOP for logging, cache processing, handling Spring internal transactions, etc.
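The logging use case mentioned above can be sketched as a Spring AOP aspect. The pointcut package `com.example.service` is an illustrative assumption:

```java
// A minimal Spring AOP logging aspect: adds timing around service
// methods without modifying their source code.
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {

    // Intercept every method in the (assumed) service package.
    @Around("execution(* com.example.service..*(..))")
    public Object logTiming(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return pjp.proceed();   // run the intercepted method
        } finally {
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(pjp.getSignature() + " took " + elapsed + " ms");
        }
    }
}
```

Because Spring AOP is proxy-based, this advice applies to calls that go through the Spring-managed proxy, which is exactly the "dynamically add extra functionality without modifying the source code" behavior described above.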
Spring Boot
Principle of Spring Boot Auto-Configuration
The principle of Spring Boot Auto-Configuration is based on conditional annotations. Spring Boot determines whether to add configuration classes to the container based on information such as classpath jars, classes, and properties. If configuration classes need to be added, Spring Boot will automatically configure the underlying framework, such as Tomcat, Redis, or MySQL, based on conditions. If no configuration classes need to be added, no action will be taken on the container.
@SpringBootApplication encapsulates @SpringBootConfiguration, @EnableAutoConfiguration, and @ComponentScan annotations.
@EnableAutoConfiguration is the core of automatic configuration. This annotation imports a configuration selector via @Import, which reads the fully qualified names of candidate configuration classes from the spring.factories file on the classpath (in newer Spring Boot versions, from META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports). The beans defined in these configuration classes are imported into the Spring container only when the conditions specified by annotations such as @ConditionalOnClass are met.
There will be conditional checks like @ConditionalOnClass to determine if there is a corresponding class file.
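A hand-written configuration class that mimics the auto-configuration pattern might look like this; `MyCache` and `MyCacheAutoConfiguration` are illustrative names, not real Spring Boot classes:

```java
// Sketch of the conditional-configuration pattern used by
// Spring Boot auto-configuration classes.
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
// The whole class is skipped unless MyCache is present on the classpath.
@ConditionalOnClass(MyCache.class)
public class MyCacheAutoConfiguration {

    // Register a default bean only if the user has not defined their own.
    @Bean
    @ConditionalOnMissingBean
    public MyCache myCache() {
        return new MyCache();
    }
}

// Placeholder class standing in for a third-party library type.
class MyCache { }
```

This is the same mechanism Spring Boot uses for Tomcat, Redis, and similar integrations: the presence of classes and properties on the classpath decides whether the configuration takes effect.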
Spring Bean
Is a singleton bean in Spring thread-safe?
Singleton beans in the Spring container are not thread-safe by themselves: the container does not synchronize access to them, so in a multithreaded environment concurrent access to the same bean can cause data-consistency issues. However, this does not mean that singleton beans cannot be used safely in a multithreaded environment.
Most Spring beans (controllers, services, DAOs) are stateless: they hold no mutable member variables, so concurrent calls to their methods do not interfere with each other, and they can safely be shared across threads.
If a singleton bean does have state (member variables that are shared and modified by multiple threads), it is not thread-safe. In such cases, locking, thread-safe data structures, ThreadLocal storage, or prototype scope must be employed to ensure that multiple threads do not interfere with each other when accessing the same bean, thereby avoiding data inconsistency issues.
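A minimal sketch of the stateful-singleton problem and one fix, in plain Java (the `Counter` classes are illustrative stand-ins for a stateful bean):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Shared mutable state like this makes a singleton bean unsafe
// under concurrent access: count++ is a non-atomic read-modify-write.
class UnsafeCounter {
    private int count = 0;
    void increment() { count++; }
    int get() { return count; }
}

// One fix: make the shared state itself thread-safe.
class SafeCounter {
    private final AtomicInteger count = new AtomicInteger();
    void increment() { count.incrementAndGet(); }
    int get() { return count.get(); }
}

public class CounterDemo {
    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // 4000 with SafeCounter
    }
}
```

With `UnsafeCounter` in place of `SafeCounter`, the final count can be less than 4000 because increments are lost, which is exactly the data-inconsistency problem described above.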
Lifecycle of Spring Beans
The lifecycle of a Spring Bean includes several steps:
- Instantiation: The Spring container creates a Bean instance through reflection and allocates memory space for it.
- Property Injection: Based on the dependency relationship defined in the Bean definition, the Spring container injects dependent objects into the Bean. This process can be achieved through either an XML configuration file or annotations.
- Initialization: After instantiation and property injection, the Spring container executes a series of initialization operations: pre-initialization callbacks (e.g., a BeanPostProcessor's postProcessBeforeInitialization), the init method itself, and post-initialization callbacks (postProcessAfterInitialization). The init method can be specified in the Bean definition via the init-method attribute in XML or with the @PostConstruct annotation.
- Use of Bean: Once the Bean finishes initialization, it is handed over to the application for use. The ApplicationContext interface can be used to obtain a Bean instance and call its methods to complete business logic.
- Destruction of Bean: When the application context shuts down, the Spring container executes destruction operations, including destruction callbacks (e.g., a DestructionAwareBeanPostProcessor) followed by the destroy method. The destroy method can be specified in the Bean definition via the destroy-method attribute in XML or with the @PreDestroy annotation.
In Spring, each Bean is a singleton by default, meaning there is only one instance throughout the entire application. If you need multiple instances of the same Bean type, you can set the Bean's scope to prototype using the @Scope annotation, so that a new instance is created every time it is requested.
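A sketch of the lifecycle callbacks described above; `ConnectionPoolBean` is an illustrative name, and depending on the Spring version these annotations come from `javax.annotation` instead of `jakarta.annotation`:

```java
// Lifecycle callbacks on a Spring-managed bean.
import jakarta.annotation.PostConstruct;
import jakarta.annotation.PreDestroy;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component
@Scope("singleton")   // the default; "prototype" would create one instance per request
public class ConnectionPoolBean {

    // Runs after instantiation and property injection (init phase).
    @PostConstruct
    public void init() {
        System.out.println("pool opened");
    }

    // Runs when the container shuts down (destroy phase).
    @PreDestroy
    public void shutdown() {
        System.out.println("pool closed");
    }
}
```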
Circular Dependencies in Spring Beans
Circular dependencies refer to a situation where two or more Beans hold references to each other, forming a closed cycle. In such cases, a naive creation process cannot complete, because each Bean requires the others to exist before it can be fully created.
Spring resolves circular dependencies through a special bean-creation mechanism (an early-reference cache), allowing all beans in the chain to be created correctly. If the circular dependency arises from field or setter injection, the Spring container can handle it; if it is introduced through constructor injection, the Spring container throws an exception.
When handling circular dependencies between singleton beans, the Spring container uses a three-level cache:
- singletonObjects holds fully initialized singleton beans.
- earlySingletonObjects holds partially constructed beans that have been exposed early, so a dependent bean can obtain a reference before initialization finishes.
- singletonFactories holds ObjectFactory callbacks that can produce an early reference (or a proxy, when AOP is involved) for a bean that is still being created.
If the cycle cannot be resolved this way, for example with constructor injection or prototype-scoped beans, the container throws a BeanCurrentlyInCreationException, prompting developers to redesign the dependency relationships between Beans.
To avoid circular dependencies, the best approach is to redesign the components so the cycle disappears. When a cycle is unavoidable, prefer setter or field injection (which Spring can resolve) over constructor injection, or use the @Lazy annotation to delay a bean's initialization and break the cycle.
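A sketch of breaking a constructor-injection cycle with @Lazy; `ServiceA` and `ServiceB` are illustrative names:

```java
// Two beans that depend on each other through their constructors.
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;

@Service
class ServiceA {
    private final ServiceB b;

    // @Lazy makes Spring inject a proxy for ServiceB here, so the real
    // ServiceB is only created on first use, which breaks the cycle.
    ServiceA(@Lazy ServiceB b) { this.b = b; }
}

@Service
class ServiceB {
    private final ServiceA a;

    ServiceB(ServiceA a) { this.a = a; }
}
```

Without @Lazy, this pair would fail at startup with a BeanCurrentlyInCreationException, since neither constructor can run before the other bean exists.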
Popular Spring Framework Annotations
Spring Core
| Annotation | Explanation |
|---|---|
| @Component, @Controller, @Service, @Repository | Use on classes for instantiating beans |
| @Autowired | Inject based on type |
| @Scope | Marking the scope of a bean |
| @Configuration | Identifying Spring configuration classes for creating the container |
| @ComponentScan | Identifying packages to scan for Spring components during initialization |
| @Bean | Use on methods to store their return values in the container |
| @Before, @After | Used for AOP |
SpringMVC
| Annotation | Description |
|---|---|
| @RequestMapping | Maps the request path; it can be defined on a class or a method. If defined on a class, all methods in the class use that address as the parent path. |
| @RequestBody | Receives JSON data from the HTTP request and converts it into a Java object. |
| @RequestParam | Specifies the name of the request parameter. |
| @PathVariable | Obtains the request parameter from the request path and passes it as a parameter to the method. |
| @ResponseBody | Converts the returned object from the method into a JSON response and returns it to the front-end. |
| @RequestHeader | Retrieves request header data. |
| @RestController | A shortcut for @Controller + @ResponseBody, used to define a controller that handles incoming HTTP requests and returns a JSON response. |
Spring Boot
| Annotation | Description |
|---|---|
| @SpringBootConfiguration | Marks the class as a Spring Boot configuration class (a specialized @Configuration) |
| @EnableAutoConfiguration | Turns on automatic configuration |
| @ComponentScan | Scans for SpringBoot components |
Why decompose a system into many services that are then combined to provide the functionality of the whole system?
There are many benefits to decomposing a system into multiple services.
Firstly, decomposing a massive monolithic application into multiple services solves the complexity problem. With the same overall functionality, the application is divided into manageable services, each with a clear boundary. The microservice architecture pattern provides a modular solution for functions that would be difficult to implement well in a single monolithic codebase. As a result, individual services can be developed, understood, and maintained more easily.
Secondly, this architecture allows each service to have a dedicated development team. Developers can freely choose their development technology, as long as the service exposes an agreed API.
Then, this freedom means developers are not locked into the possibly outdated technologies chosen at the start of the project; they can adopt current ones. For relatively simple services, even rewriting the old code with modern technology remains feasible.
Microservice architecture is an architectural concept that decomposes functionality into discrete services in order to achieve decoupling of solutions. It has several key concepts:
- Service: A basic unit of the microservice architecture pattern is a service. Each service should encapsulate a set of related business functions and can be deployed, upgraded, and scaled independently.
- Protocol: Communication between microservices can use various protocols such as REST, SOAP, AMQP, etc.
- Data storage: Since microservices are independent, separate data storage management is required. Each service can use different databases or storage technologies.
- Deployment: In microservice architecture, each service can be deployed independently. This means that each service can run in different environments and can be adjusted as needed.
RDBMS and NoSQL: What Are the Differences?
The main differences between Relational Database Management Systems (RDBMS) and NoSQL databases lie in their data storage methods and query interfaces. RDBMS use the structured query language (SQL) to query data, while NoSQL databases use various data models — key-value, document, or column-family — each with its own query interface. Additionally, data in an RDBMS is stored in tables, whereas NoSQL databases can use structures such as hash tables, graphs, or trees.
The advantages of RDBMS include:
- Strong query capability provided by structured query language (SQL).
- Support for transaction processing, ensuring data consistency and integrity.
- Support for data integrity constraints, ensuring accurate data.
- Support for advanced data analysis and reporting tools.
The disadvantages of RDBMS include:
- Inability to handle large amounts of unstructured data.
- Lower write throughput and horizontal scalability than NoSQL databases for very large or distributed workloads.
- Increased management workload such as backup, recovery, and monitoring.
The advantages of NoSQL databases include:
- Ability to handle large amounts of unstructured data.
- High performance and scalability.
- High flexibility to adjust according to needs.
What are the methods for authentication in microservices?
- Single Sign-On (SSO): The user authenticates once against a central authentication service, and the resulting session is shared. A drawback is that every user-facing service must interact with the authentication service, generating a large amount of network traffic.
- Token-based authentication (e.g., JWT): The authentication service issues a signed token after login; each service can then verify the token locally without calling back to the authentication service on every request.
- OAuth2: OAuth2 is an authorization framework that allows third-party applications to access information stored on another service provider's platform with user consent.
- OpenID Connect: OpenID Connect is an identity verification protocol based on OAuth2 used to authenticate users and authorize access to protected resources.
When is polling used in a distributed system?
Polling is a mechanism used in distributed systems to check for updates or new messages from other nodes. Typically, a designated node is responsible for monitoring the status of the other nodes in the system and checking for changes. This node regularly sends requests to the other nodes to retrieve their current state or receive new messages, then processes the responses and uses them to update its local view of the system.
In a distributed system, individual nodes may be located in different physical locations or operate at different times, so a mechanism is needed to coordinate and synchronize communication and state information between them. Polling serves this purpose: it ensures that the various parts of the system always have reasonably fresh state and can respond promptly to requests from other nodes.
The steps involved in polling typically include:
- Determine which nodes need to be monitored: In a distributed system, there may be multiple nodes that need to be monitored to obtain updates or new messages. These nodes are typically critical components or data storage locations.
- Choose an appropriate communication protocol: To communicate with these nodes, an appropriate communication protocol needs to be selected. Examples include HTTP, REST API, or other lightweight protocols.
- Regular polling: Once the nodes to be monitored and the chosen communication protocol are determined, regular polling of these nodes is necessary to check their current status or receive new messages. This can be achieved using timers or scheduled tasks.
- Process responses: When a node is polled, its response needs to be processed. This may include retrieving information about the node (such as its current status) or receiving new messages from other nodes.
- Update local state: Finally, the responses received from the other nodes are used to update the monitoring node's local state. This keeps the system's view up to date so it can respond promptly to requests from other nodes.
Note that polling can cause performance issues, since it requires regularly querying individual nodes to check their status. If the system fails or the network is interrupted, polled state may also become stale or unreliable. Performance optimization (for example, adaptive polling intervals) and reliability should therefore be considered when using polling.
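The steps above can be sketched in plain Java; the node names and the `Supplier`-based status check stand in for whatever protocol (HTTP, REST) the real system would use:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Illustrative poller: regularly asks each monitored node for its
// status and keeps the latest responses in a local state map.
public class NodePoller {
    private final Map<String, String> latestStatus = new ConcurrentHashMap<>();

    // One polling pass: query every node and update local state.
    public void pollOnce(Map<String, Supplier<String>> nodes) {
        nodes.forEach((name, statusCheck) -> latestStatus.put(name, statusCheck.get()));
    }

    // Snapshot of the most recently observed state of each node.
    public Map<String, String> snapshot() {
        return Map.copyOf(latestStatus);
    }

    // Regular polling via a scheduled task, as described in the steps above.
    public ScheduledExecutorService startPolling(Map<String, Supplier<String>> nodes,
                                                 long periodSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> pollOnce(nodes), 0, periodSeconds, TimeUnit.SECONDS);
        return scheduler;
    }
}
```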
Database
Features of Transactions
ACID refers to the four characteristics that a transaction in a reliable database management system (DBMS) should have: atomicity, consistency, isolation, and durability.
These four features are standards for evaluating whether a transaction can meet the requirements of data processing and are also the foundation for ensuring the correct, efficient, and safe operation of the database system.
- Atomicity: A transaction is an indivisible unit of work; the operations in a transaction either complete entirely or not at all.
- Consistency: The database moves from one consistent state to another consistent state before and after the execution of a transaction.
- Isolation: When multiple users access the database concurrently, one user's transaction cannot be interfered with by other users' transactions; the data of concurrent transactions must be isolated from each other.
- Durability: Once a transaction is committed, its impact on the database is permanent.
Problems Brought by Concurrent Transactions
- Dirty Read: A transaction reads data written by another uncommitted transaction. While one transaction is modifying data it has not yet committed, another transaction reads that data and acts on it; because the data is uncommitted and may be rolled back, the reader has worked with "dirty" data and its results may be incorrect.
- Non-repeatable Read: Within a single transaction, two reads of the same row return different results, because another transaction modified or deleted the row (and committed) between the two reads.
- Phantom Read: Within a single transaction, re-executing the same range query returns a different set of rows, because another transaction inserted or deleted rows matching the query (and committed) in between.
- Lost Update: Two or more transactions select the same row and then update it based on the originally selected values; because each transaction is unaware of the others, the last update overwrites the updates made by the other transactions.
Transaction Isolation Levels
There are four levels of database transaction isolation levels, ranked from low to high as follows: Read uncommitted, Read committed, Repeatable read, and Serializable.
- Read uncommitted: Allows dirty reads, non-repeatable reads, and phantom reads.
- Read committed: Prevents dirty reads but allows non-repeatable reads and phantom reads.
- Repeatable read: Prevents dirty reads and non-repeatable reads but allows phantom reads.
- Serializable: Prevents dirty reads, non-repeatable reads, and phantom reads.
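In JDBC, these four levels correspond to constants on `java.sql.Connection`; the following sketch just prints them (setting the level on a live connection, shown in the comment, assumes a DataSource configured elsewhere):

```java
import java.sql.Connection;

// The four ANSI isolation levels as java.sql.Connection constants.
public class IsolationLevels {
    public static void main(String[] args) {
        System.out.println("READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED);
        System.out.println("READ_COMMITTED   = " + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("REPEATABLE_READ  = " + Connection.TRANSACTION_REPEATABLE_READ);
        System.out.println("SERIALIZABLE     = " + Connection.TRANSACTION_SERIALIZABLE);
        // On a live connection the level would be set per transaction:
        // conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
    }
}
```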
MySQL Default Isolation Level
The default isolation level of MySQL (InnoDB) is Repeatable Read.
Differences between Undo Log and Redo Log
Undo log and redo log are two important concepts in database transactions. The undo log is used to record the state before the transaction starts, and it is used for rollback operations when a transaction fails. The redo log records the state after the transaction is executed, and it is used to restore the data that has been successfully updated by a transaction.
The undo log is a logical log, while the redo log is a physical log. In MySQL (InnoDB), each data-changing SQL statement generates corresponding undo and redo records. When a transaction is committed, MySQL flushes the redo log to disk first (write-ahead logging), so the change is durable even if the modified data pages have not yet been written. When a transaction is rolled back, MySQL uses the undo log to reverse its changes; after a crash, MySQL replays the redo log to restore committed changes. The undo log is also used by MVCC to reconstruct earlier row versions.
Multi-Version Concurrency Control (MVCC)
MVCC stands for Multi-Version Concurrency Control, a concurrency control method commonly used in database management systems to implement concurrent access to databases. Its purpose is to reduce read-write conflicts between concurrent transactions, thereby improving the concurrency performance of the system.
The characteristics of the MVCC mechanism are as follows:
- It allows multiple versions to exist simultaneously and execute concurrently.
- Reads do not need to acquire locks (writes still do), resulting in high read performance.
- It only works under read-committed and repeatable-read transaction isolation levels.
Principle of MySQL Master-Slave Synchronization
The core of MySQL master-slave synchronization is binary logs. Binary logs (BINLOG) record all Data Definition Language (DDL) statements and Data Manipulation Language (DML) statements, but do not include data query (SELECT, SHOW) statements.
- When a transaction is committed by the master database, the changes to data are recorded in the binary log file Binlog.
- The slave database reads the binary log file Binlog and writes it into its relay log Relay Log.
- The slave database replays events in the relay log, reflecting changes in its own data.
MyBatis
Execution Process of MyBatis
- Reads the MyBatis configuration file: mybatis-config.xml to load the runtime environment and mapping files
- Constructs the SessionFactory (SqlSessionFactory)
- The SessionFactory creates a SqlSession object, which contains all methods for executing SQL statements
- The SqlSession delegates database operations to the Executor interface, which executes the SQL and also maintains the query cache
- The parameter of the Executor interface's execution method is a MappedStatement type, encapsulating mapping information
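The bootstrap sequence above can be sketched with the MyBatis API; the config file name `mybatis-config.xml` follows the text, while `UserMapper` is an illustrative mapper interface:

```java
// Sketch of the MyBatis bootstrap sequence described above.
import java.io.InputStream;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class MyBatisBootstrap {

    // Illustrative mapper interface; real ones declare query methods.
    interface UserMapper { }

    public static void main(String[] args) throws Exception {
        // 1. Read the configuration file to load the environment and mappers.
        InputStream config = Resources.getResourceAsStream("mybatis-config.xml");

        // 2. Build the SqlSessionFactory from that configuration.
        SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(config);

        // 3. Open a SqlSession; internally it delegates SQL execution to an
        //    Executor, which also maintains the first-level cache.
        try (SqlSession session = factory.openSession()) {
            UserMapper mapper = session.getMapper(UserMapper.class);
            // Mapper calls are resolved via MappedStatement mapping metadata.
        }
    }
}
```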
First Level Cache and Second Level Cache in MyBatis
The first level cache is a local cache based on PerpetualCache, a HashMap-backed implementation. Its scope is the SqlSession: when the session flushes or closes, all cached entries for that session are cleared. The first level cache is enabled by default.
The second level cache is scoped to a namespace (mapper) and does not depend on a single SQL session. By default it also uses PerpetualCache (HashMap-based) storage. It must be enabled explicitly.
When an insert, update, or delete is executed within a scope (Session for the first level cache, namespace for the second), the cached results of all select statements in that scope are cleared by default.