Best practice: columns every Postgres table should have

They should all have these columns:

  • Primary key: id, bigserial (which creates a sequence behind the scenes)
  • Date created: utc_created – use current_timestamp as the default value
  • Date updated: utc_updated – use a trigger to update it automatically (see code below)
  • Creator: created_by – text type, can be anything
  • Updater: updated_by – text type, can be anything

Code:

create table my_table
(
    id          bigserial   not null,

    utc_created timestamptz not null default current_timestamp,
    created_by  text        not null,
    utc_updated timestamptz null,
    updated_by  text        null,

    primary key (id)
);


-- The trigger function must exist before the trigger that references it
CREATE OR REPLACE FUNCTION update_utc_updated_column()
    RETURNS TRIGGER AS $$
BEGIN
    NEW.utc_updated = current_timestamp;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_updated_by_trigger BEFORE UPDATE
    ON my_table FOR EACH ROW EXECUTE PROCEDURE
    update_utc_updated_column();
-- See https://www.revsys.com/tidbits/automatically-updating-a-timestamp-column-in-postgresql/

Timezone choice for Postgres and things to do in Java

Always use UTC as the timezone for Postgres timestamp/date columns; this is widely considered best practice and is what the Postgres community recommends, e.g.

utc_created timestamptz not null

And your Java application normally uses LocalDateTime/LocalDate, if your system does not operate across timezones.

So there needs to be a conversion in the data access layer of your Java application. Per the JDBC 4.2 type mapping, timestamp with time zone in PostgreSQL should be mapped to OffsetDateTime in Java, and vice versa:

import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class MyTimeZoneHelper {
    private static final ZoneId MY_TIMEZONE_ID = ZoneId.of("Australia/Melbourne");

    public static LocalDateTime toLocalDateTime(OffsetDateTime offsetDateTime) {
        ZonedDateTime zonedDateTime = offsetDateTime.toZonedDateTime();
        return zonedDateTime.withZoneSameInstant(MY_TIMEZONE_ID).toLocalDateTime();
    }


    public static OffsetDateTime toOffsetDateTime(LocalDateTime localDateTime) {
        ZonedDateTime zonedDateTime = localDateTime.atZone(MY_TIMEZONE_ID);
        ZonedDateTime utcZonedDateTime = zonedDateTime.withZoneSameInstant(ZoneId.of("Z")); // "Z" is UTC
        return utcZonedDateTime.toOffsetDateTime();
    }
}
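For example, here is a round trip through the same conversion logic (a self-contained sketch that duplicates the helper's two methods, so it can run on its own; the July date is chosen because Melbourne is on standard time, UTC+10, in winter):

```java
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class MyTimeZoneHelperDemo {
    private static final ZoneId MY_TIMEZONE_ID = ZoneId.of("Australia/Melbourne");

    public static LocalDateTime toLocalDateTime(OffsetDateTime offsetDateTime) {
        return offsetDateTime.atZoneSameInstant(MY_TIMEZONE_ID).toLocalDateTime();
    }

    public static OffsetDateTime toOffsetDateTime(LocalDateTime localDateTime) {
        return localDateTime.atZone(MY_TIMEZONE_ID)
                .withZoneSameInstant(ZoneOffset.UTC)
                .toOffsetDateTime();
    }

    public static void main(String[] args) {
        // July is winter in Melbourne: UTC+10, no daylight saving
        LocalDateTime local = LocalDateTime.of(2020, 7, 1, 10, 0);
        OffsetDateTime utc = toOffsetDateTime(local);
        System.out.println(utc);                  // 2020-07-01T00:00Z
        System.out.println(toLocalDateTime(utc)); // 2020-07-01T10:00
    }
}
```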

And if you are using MyBatis, you can build a type handler based on the helper above:

import org.apache.ibatis.type.BaseTypeHandler;
import org.apache.ibatis.type.JdbcType;
import org.apache.ibatis.type.OffsetDateTimeTypeHandler;
import org.apache.ibatis.type.TypeHandler;

import java.sql.CallableStatement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.time.LocalDateTime;
import java.time.OffsetDateTime;

/**
 * Convert the UTC timestamps in the new databases (postgres) to LocalDateTime, and vice versa
 */
public class MyTimestampWithTimezoneTypeHandler extends BaseTypeHandler<LocalDateTime> implements TypeHandler<LocalDateTime> {

    private OffsetDateTimeTypeHandler delegate = new OffsetDateTimeTypeHandler();


    @Override
    public void setNonNullParameter(PreparedStatement ps, int i, LocalDateTime parameter, JdbcType jdbcType) throws SQLException {
        delegate.setNonNullParameter(ps, i, MyTimeZoneHelper.toOffsetDateTime(parameter), jdbcType);
    }

    @Override
    public LocalDateTime getNullableResult(ResultSet rs, String columnName) throws SQLException {
        OffsetDateTime offsetDateTime = delegate.getNullableResult(rs, columnName);
        return offsetDateTime == null ? null : MyTimeZoneHelper.toLocalDateTime(offsetDateTime);
    }

    @Override
    public LocalDateTime getNullableResult(ResultSet rs, int columnIndex) throws SQLException {
        OffsetDateTime offsetDateTime = delegate.getNullableResult(rs, columnIndex);
        return offsetDateTime == null ? null : MyTimeZoneHelper.toLocalDateTime(offsetDateTime);
    }

    @Override
    public LocalDateTime getNullableResult(CallableStatement cs, int columnIndex) throws SQLException {
        OffsetDateTime offsetDateTime = delegate.getNullableResult(cs, columnIndex);
        return offsetDateTime == null ? null : MyTimeZoneHelper.toLocalDateTime(offsetDateTime);
    }
}
And reference the type handler in your result map:

<resultMap>
...
 <result column="utc_created" jdbcType="TIMESTAMP_WITH_TIMEZONE" property="createdWhen"  typeHandler="MyTimestampWithTimezoneTypeHandler"/>
...

</resultMap>

MyBatis dynamic “union all”

Take a look at the following union sql fragment. What if includeFoo is false?

    <sql id="dynamic_union">
        <if test="includeFoo">
            (select * from foo)
        </if>

        <if test="includeBar">
            union all
            (select * from bar)
        </if>
    </sql>

You will end up with

union all (select * from bar)

which is not legal SQL: there must be a result set before "union all".

The solution is to use the <trim/> tag:

    <sql id="dynamic_union">
        <trim prefixOverrides="union all">
            <if test="includeFoo">
                (select * from foo)
            </if>

            <if test="includeBar">
                union all
                (select * from bar)
            </if>
        </trim>
    </sql>

It means: if the content inside <trim/> starts with "union all", that leading "union all" will be removed.
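Conceptually, prefixOverrides behaves like stripping a matching keyword from the front of the assembled fragment. A rough Java sketch of that behavior (trimPrefixOverride is a hypothetical helper for illustration, not MyBatis internals):

```java
public class TrimSketch {
    // Mimics <trim prefixOverrides="union all">: if the assembled SQL
    // starts with "union all", drop that leading keyword.
    public static String trimPrefixOverride(String sql, String override) {
        String trimmed = sql.trim();
        if (trimmed.toLowerCase().startsWith(override.toLowerCase())) {
            return trimmed.substring(override.length()).trim();
        }
        return trimmed;
    }

    public static void main(String[] args) {
        // includeFoo=false, includeBar=true: the illegal leading "union all" is removed
        System.out.println(trimPrefixOverride("union all (select * from bar)", "union all"));
        // includeFoo=true, includeBar=true: nothing to strip, fragment is unchanged
        System.out.println(trimPrefixOverride("(select * from foo) union all (select * from bar)", "union all"));
    }
}
```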

Code snippet: subListSafely()

    /**
     * @param list      can be null
     * @param fromIndex inclusive
     * @param endIndex  exclusive
     * @param <T>       the element type
     * @return a sublist, or an empty list when the arguments are out of range
     */
    public static <T> List<T> subListSafely(List<T> list, int fromIndex, int endIndex) {
        if (list == null) {
            return new ArrayList<>();
        }

        // also guard fromIndex >= endIndex, otherwise subList() can throw IllegalArgumentException
        if (fromIndex < 0 || endIndex <= 0 || fromIndex >= endIndex) {
            return new ArrayList<>();
        }

        if (list.size() - 1 < fromIndex) {
            return new ArrayList<>();
        }
        return list.subList(fromIndex, Math.min(endIndex, list.size()));
    }

And the unit test:

    @Test
    public void subListSafely(){
        // null list
        assertArrayEquals(new Object[]{}, MyCollectionUtils.subListSafely(null, 0, 3).toArray());

        //empty list
        assertArrayEquals(new Object[]{}, MyCollectionUtils.subListSafely(new ArrayList<String>(), 0, 3).toArray());
        assertArrayEquals(new Object[]{}, MyCollectionUtils.subListSafely(new ArrayList<String>(), 1, 3).toArray());

        //non-empty list, increasing fromIndex one by one
        assertArrayEquals(new Object[]{}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), -1, 3).toArray());
        assertArrayEquals(new Object[]{"a", "b", "c"}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 0, 3).toArray());
        assertArrayEquals(new Object[]{"b", "c"}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 1, 3).toArray());
        assertArrayEquals(new Object[]{"c"}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 2, 3).toArray());
        assertArrayEquals(new Object[]{}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 3, 3).toArray());
        assertArrayEquals(new Object[]{}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 4, 3).toArray());

        //non-empty list, decrease endIndex one by one
        assertArrayEquals(new Object[]{"a", "b", "c"}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 0, 4).toArray());
        assertArrayEquals(new Object[]{"a", "b", "c"}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 0, 3).toArray());
        assertArrayEquals(new Object[]{"a", "b"}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 0, 2).toArray());
        assertArrayEquals(new Object[]{"a"}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 0, 1).toArray());
        assertArrayEquals(new Object[]{}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 0, 0).toArray());
        assertArrayEquals(new Object[]{}, MyCollectionUtils.subListSafely(Arrays.asList("a", "b", "c"), 0, -1).toArray());
    }

Use Flyway in a multi-module gradle project

Requirements

  • Some of the sub-projects need to run flyway as a gradle task
  • We want a single command line to do the flyway migrations for all the sub-projects mentioned above

Solution

  • Apply the flyway plugin in these sub-projects
  • Configure flyway in these sub-projects
  • Create a root-project task that calls the flyway tasks in the sub-projects

Code

buildscript {
    dependencies {
        classpath 'org.postgresql:postgresql:42.2.10' //or any other jdbc driver
    }
}

plugins {
    id "org.flywaydb.flyway" version "6.2.4" apply false //declare the plugin but do not apply it to the root project
}

task dbMigrate() { //root project task
    dependsOn "foo:flywayMigrate"
    dependsOn "bar:flywayMigrate"
}

...
project(':foo') {  //a sub project
    apply plugin: "org.flywaydb.flyway" 
    def dbHost = findProperty('fooDbHost')

    flyway{
        url = "jdbc:postgresql://${dbHost}:5432/xxx"
        user = findProperty('fooDbUsername')
        password = findProperty('fooDbPassword')
    }
...
}

...
project(':bar') {  //a sub project
    apply plugin: "org.flywaydb.flyway" 
    def dbHost = findProperty('barDbHost')

    flyway{
        url = "jdbc:postgresql://${dbHost}:5432/xxx"
        user = findProperty('barDbUsername')
        password = findProperty('barDbPassword')
    }
...
}

Now run it on the root level of your project:

./gradlew dbMigrate -PfooDbHost=xxx -PfooDbUsername=xxx -PfooDbPassword=xxx  -PbarDbHost=xxx -PbarDbUsername=xxx -PbarDbPassword=xxx





Surprising features of postgres

Disclaimer: Not sure if they will surprise you

A role can be a user

This is confusing! It goes against common sense: in Postgres a "user" is simply a role with the LOGIN attribute, so CREATE USER is just CREATE ROLE ... LOGIN.

utf8 charset supports 4-byte chars

In MySQL you have to specify the charset as "utf8mb4" (its "utf8" only supports up to 3 bytes per character), but in Postgres "utf8" is real UTF-8 and can just be used as-is.
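For instance, an emoji outside the Basic Multilingual Plane encodes to four bytes in UTF-8; Postgres's utf8 encoding stores it without any special setup. A quick check:

```java
import java.nio.charset.StandardCharsets;

public class Utf8Check {
    public static void main(String[] args) {
        String emoji = "\uD83D\uDE00"; // U+1F600 (grinning face), a surrogate pair in Java
        byte[] bytes = emoji.getBytes(StandardCharsets.UTF_8);
        System.out.println(bytes.length); // 4
    }
}
```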

When you create a new database, any role is allowed to create objects in its public schema (prior to PostgreSQL 15)

This is dangerous. You should revoke the CREATE privilege on the public schema from other roles. See https://stackoverflow.com/questions/760210/how-do-you-create-a-read-only-user-in-postgresql/39029296#39029296

Not every kind of state needs to go to Redux

Only some kinds of state need to be in Redux:

  • State that’s shared by multiple components on the same page
    • State that’s read by multiple components
    • State that’s written by one component and read by another
  • State whose lifecycle is longer than its component. For example, “recordList” for a list page – If you go to a detail page from a list page, and then come back, you don’t want to re-fetch the list from the backend. With “recordList” in redux you can just read it from the Redux store.

Typical cases where you don't need Redux for your state:

  • Form values – and this is why Formik is better than Redux-Form
  • “currentRecord” for a detail page

Best practice of html5 input for limited length

What you need is:

  • Only numbers can be input
  • The length of input is limited to N, say 2
  • On mobile devices, a number pad will be shown

And the way to do it is:

<input type="number" pattern="[0-9]*" oninput="this.value=this.value.slice(0,2)"/>

If you are using material-ui, it will be:

<TextField
    inputProps={{
        type: "number",
        pattern: "[0-9]*" 
    }}

    onInput={(e: any)=>{
        e.target.value = e.target.value.slice(0,2)
    }}
/>

Generate assertEquals(xxx, bean.getXxx()) for all the properties of a JavaBean

In IntelliJ, create a live template like this:

    public static void main(String[] args) {
        Class<?> clazz = $Bean$.class;
        String object = "$bean$";
        java.lang.reflect.Method[] methods = clazz.getMethods();
        for (java.lang.reflect.Method m : methods) {
            if ((m.getName().startsWith("get") || m.getName().startsWith("is"))
                    && !m.getName().equals("getClass")
                    && m.getParameterTypes().length == 0) {
                System.out.println(
                    "assertEquals(\"xxx\", " + object + "." + m.getName() + "());"
                );
            }
        }
    }

Make sure chromedriver can run with Java Selenium code, both on a real machine and in Docker

Install chromedriver

You want the following line of code to run successfully:

ChromeDriver driver = new ChromeDriver();

On Windows, you need to download chromedriver from https://chromedriver.chromium.org/downloads and put it on the system PATH (an alternative is to set the executable's path as the system property "webdriver.chrome.driver", which is less elegant).

On Linux, or in a Linux-based Docker image, you can

apt update
apt install chromium-driver -y   # Debian-based systems; the package name varies by distro

Options needed for ChromeDriver

        // Without withWhitelistedIps("") you may see:
        // "bind() returned an error, errno=99: Cannot assign requested address (99)"
        ChromeDriverService chromeDriverService = new ChromeDriverService.Builder()
                .withWhitelistedIps("")
                .build();

...
        // without the following you may see "DevToolsActivePort file doesn't exist" 
        ChromeOptions chromeOptions = new ChromeOptions();
        if(isUnixBased()) {
            chromeOptions.addArguments("--headless");
            chromeOptions.addArguments("--no-sandbox");
            chromeOptions.addArguments("--disable-dev-shm-usage");
        }

...
        ChromeDriver driver = new ChromeDriver(chromeDriverService, chromeOptions);