If I understand correctly, you want to join ScheduleRequest with User and apply the IN clause to the userName property of the User entity.
I'd need to work a bit on this schema, but you can try this trick, which is much more readable than the code you posted and avoids the Join part (because it handles the join logic outside the Criteria query):
List<String> myList = new ArrayList<String>();
for (User u : usersList) {
    myList.add(u.getUsername());
}
Expression<String> exp = scheduleRequest.get("createdBy");
Predicate predicate = exp.in(myList);
criteria.where(predicate);
In order to write more type-safe code you could also use Metamodel by replacing this line:
Expression<String> exp = scheduleRequest.get("createdBy");
with this:
Expression<String> exp = scheduleRequest.get(ScheduleRequest_.createdBy);
If it works, then you may try to add the join logic into the Criteria query. But right now I can't test it, so I prefer to see if somebody else wants to try.
Not a perfect answer, but maybe these code snippets will help.
public <T> List<T> findListWhereInCondition(Class<T> clazz,
        String conditionColumnName, Serializable... conditionColumnValues) {
    QueryBuilder<T> queryBuilder = new QueryBuilder<T>(clazz);
    addWhereInClause(queryBuilder, conditionColumnName, conditionColumnValues);
    queryBuilder.select();
    return queryBuilder.getResultList();
}

private <T> void addWhereInClause(QueryBuilder<T> queryBuilder,
        String conditionColumnName, Serializable... conditionColumnValues) {
    Path<Object> path = queryBuilder.root.get(conditionColumnName);
    In<Object> in = queryBuilder.criteriaBuilder.in(path);
    for (Serializable conditionColumnValue : conditionColumnValues) {
        in.value(conditionColumnValue);
    }
    queryBuilder.criteriaQuery.where(in);
}
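Usage might then look like this (a sketch; the dao instance and the User entity are illustrative, not part of the snippet above):
List<User> users = dao.findListWhereInCondition(User.class, "userName", "alice", "bob");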
If you set id in your database to be the primary key with auto-increment, then this line of code is wrong:
user.setId(1);
Try with this:
public static void main(String[] args) {
    UserBean user = new UserBean();
    user.setUserName("name1");
    user.setPassword("passwd1");
    em.persist(user);
}
JPA is indisputably a great simplification in the domain of enterprise applications built on the Java platform. As a developer who had to cope with the intricacies of the old entity beans in J2EE, I see the inclusion of JPA among the Java EE specifications as a big leap forward. However, while delving deeper into the JPA details, I find things that are not so easy. In this article I compare the EntityManager's merge and persist methods, whose overlapping behavior may cause confusion not only to a newbie. Furthermore, I propose a generalization that sees both methods as special cases of a more general method, combine.
Persisting entities
In contrast to the merge method, the persist method is pretty straightforward and intuitive. The most common scenario of the persist method's usage can be summed up as follows:
"A newly created instance of the entity class is passed to the persist method. After this method returns, the entity is managed and planned for insertion into the database. It may happen at or before the transaction commits or when the flush method is called. If the entity references another entity through a relationship marked with the PERSIST cascade strategy this procedure is applied to it also."
The specification goes into more detail; however, remembering it all is not crucial, as these details cover more or less exotic situations only.
Merging entities
In comparison to persist, the description of merge's behavior is not so simple. There is no main scenario, as there is in the case of persist, and a programmer must remember all the scenarios in order to write correct code. It seems to me that the JPA designers wanted a method whose primary concern would be handling detached entities (as opposed to the persist method, which deals primarily with newly created entities). The merge method's major task is to transfer the state from an unmanaged entity (passed as the argument) to its managed counterpart within the persistence context. This task, however, divides further into several scenarios which worsen the intelligibility of the overall method's behavior.
Instead of repeating paragraphs from the JPA specification I have prepared a flow diagram that schematically depicts the behaviour of the merge method:
So, when should I use persist and when merge? Based on the descriptions above: use persist for a newly created entity instance that should be inserted into the database, and use merge for a detached entity whose changed state should be copied back into the persistence context.
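As a minimal sketch of the two cases (assuming a managed EntityManager em and a hypothetical Customer entity; previouslyLoadedCustomer stands for an instance loaded in an earlier, now-closed persistence context):
Customer fresh = new Customer();       // brand new, not in the database yet
fresh.setName("Alice");
em.persist(fresh);                     // now managed, scheduled for INSERT

Customer detached = previouslyLoadedCustomer; // detached instance
detached.setName("Bob");
Customer managed = em.merge(detached); // state copied onto a managed instance
// note: 'detached' itself stays unmanaged; further changes must go through 'managed'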
If you have any NamedQueries in your entity classes, then check the stack trace for compilation errors. A malformed query which cannot be compiled can cause failure to load the persistence context.
Can em.flush() cause any harm when using it within a transaction?
Yes, it may hold locks in the database for a longer duration than necessary.
Generally, when using JPA you delegate transaction management to the container (a.k.a. CMT, using the @Transactional annotation on business methods), which means that a transaction is automatically started when entering the method and committed / rolled back at the end. If you let the EntityManager handle the database synchronization, SQL statement execution will only be triggered just before the commit, leading to short-lived locks in the database. Otherwise your manually flushed write operations may retain locks between the manual flush and the automatic commit, which can be long depending on the remaining method execution time.
Note that some operations automatically trigger a flush: executing a native query against the same session (the EM state must be flushed to be reachable by the SQL query), and inserting entities using a natively generated id (generated by the database, so the insert statement must be triggered for the EM to retrieve the generated id and properly manage relationships).
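To illustrate the point about lock duration, here is a sketch (the Order entity and the slow-call method are hypothetical):
@Transactional
public void updateOrder(Long id) {
    Order order = em.find(Order.class, id);
    order.setStatus("SHIPPED");
    em.flush();              // UPDATE executed here, row lock acquired now
    callSlowRemoteService(); // the lock is held during this entire call...
}                            // ...and only released at commit, here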
JPQL is mostly case-insensitive. One of the things that is case-sensitive is Java entity names. Change your query to:
"SELECT r FROM FooBar r"
You can also define a named query called findAll on each of your entities and call it in your generic findAll with:
entityManager.createNamedQuery(persistentClass.getSimpleName()+"findAll").getResultList();
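For that to work, each entity needs a matching named query; a sketch for the FooBar entity mentioned above (the query name must match what the generic code builds, here getSimpleName() + "findAll"):
@Entity
@NamedQuery(name = "FooBarfindAll", query = "SELECT r FROM FooBar r")
public class FooBar { ... }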
Both FetchType.LAZY and FetchType.EAGER are used to define the default fetch plan.
Unfortunately, you can only override the default fetch plan for LAZY fetching. EAGER fetching is less flexible and can lead to many performance issues.
My advice is to resist the urge to make your associations EAGER, because fetching is a query-time responsibility. So all your queries should use the fetch directive to retrieve only what's necessary for the current business case.
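For example, a query-time fetch plan might look like this (a sketch, assuming a hypothetical Post entity with a lazy comments collection):
List<Post> posts = em.createQuery(
        "select p from Post p join fetch p.comments where p.id = :id", Post.class)
    .setParameter("id", postId)
    .getResultList();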
I had the same exception, caused when attempting to remove the kid from the person (Person - OneToMany - Kid). On the Person side, the annotation was:
@OneToMany(fetch = FetchType.EAGER, orphanRemoval = true, ... cascade = CascadeType.ALL)
public Set<Kid> getKids() { return kids; }
On the Kid side, the annotation was:
@ManyToOne(cascade = CascadeType.ALL)
@JoinColumn(name = "person_id")
public Person getPerson() { return person; }
So the solution was to remove cascade = CascadeType.ALL and keep a plain @ManyToOne on the Kid class, and it started to work as expected.
I don't know if I am late or out of scope, but in my opinion I could do it like this (note that a JPQL parameter cannot be embedded inside a string literal, so the wildcards belong in the parameter value):
String orgName = "anyParamValue";
Query q = em.createQuery("Select O from Organization O where O.orgName LIKE :orgName");
q.setParameter("orgName", "%" + orgName + "%");
To use this, you can create a repository, for example this one:
Member findByEmail(String email);
List<Member> findByDate(Date date);
// custom query example and return a member
@Query("select m from Member m where m.username = :username and m.password=:password")
Member findByUsernameAndPassword(@Param("username") String username, @Param("password") String password);
The entity whose table has the foreign key in the database is the owning entity, and the other table, being pointed at, is the inverse entity.
@Entity
@NamedQuery(name = "Customer.listUniqueNames",
query = "SELECT DISTINCT c.name FROM Customer c")
public class Customer {
...
private String name;
public static List<String> listUniqueNames() {
return getEntityManager().createNamedQuery(
"Customer.listUniqueNames", String.class)
.getResultList();
}
}
public List<Student> findStudentByReports(Date startDate, Date endDate) {
    System.out.println("call findStudentByReports with range "
            + startDate + " - " + endDate);
    return em
        .createQuery(
            "select attendence from Attendence attendence"
            + " where attendence.admissionDate between :startDate and :endDate")
        .setParameter("startDate", startDate, TemporalType.DATE)
        .setParameter("endDate", endDate, TemporalType.DATE)
        .getResultList();
}
I find this lecture very valuable: https://vimeo.com/190275665. At point 3 it summarizes these generators and also gives some performance analysis and guidelines on when to use each one.
If you use the @OneToMany annotation with @JoinColumn, then you have a unidirectional association, like the one between the parent Post entity and the child PostComment in the following diagram:
When using a unidirectional one-to-many association, only the parent side maps the association.
In this example, only the Post entity will define a @OneToMany association to the child PostComment entity:
@OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
@JoinColumn(name = "post_id")
private List<PostComment> comments = new ArrayList<>();
If you use the @OneToMany with the mappedBy attribute set, you have a bidirectional association. In our case, the Post entity has a collection of PostComment child entities, and the child PostComment entity has a reference back to the parent Post entity, as illustrated by the following diagram:
In the PostComment entity, the post entity property is mapped as follows:
@ManyToOne(fetch = FetchType.LAZY)
private Post post;
The reason we explicitly set the fetch attribute to FetchType.LAZY is that, by default, all @ManyToOne and @OneToOne associations are fetched eagerly, which can cause N+1 query issues.
In the Post entity, the comments association is mapped as follows:
@OneToMany(
mappedBy = "post",
cascade = CascadeType.ALL,
orphanRemoval = true
)
private List<PostComment> comments = new ArrayList<>();
The mappedBy attribute of the @OneToMany annotation references the post property in the child PostComment entity, and this way Hibernate knows that the bidirectional association is controlled by the @ManyToOne side, which is in charge of managing the foreign key column value this table relationship is based on.
For a bidirectional association, you also need to have two utility methods, like addChild and removeChild:
public void addComment(PostComment comment) {
comments.add(comment);
comment.setPost(this);
}
public void removeComment(PostComment comment) {
comments.remove(comment);
comment.setPost(null);
}
These two methods ensure that both sides of the bidirectional association are in sync. Without synchronizing both ends, Hibernate does not guarantee that association state changes will propagate to the database.
The unidirectional @OneToMany association does not perform very well, so you should avoid it. You are better off using the bidirectional @OneToMany, which is more efficient.
I was thinking about this and I chose the method (property) accessor.
Why? Because field and method access behave the same, but if later I need some logic when a field is loaded, using property access saves me from moving all the annotations placed on fields.
Regards,
Grubhart
Join on one-to-many relation in JPQL looks as follows:
select b.fname, b.lname from Users b JOIN b.groups c where c.groupName = :groupName
When several properties are specified in the select clause, the result is returned as Object[]:
Object[] temp = (Object[]) em.createNamedQuery("...")
.setParameter("groupName", groupName)
.getSingleResult();
String fname = (String) temp[0];
String lname = (String) temp[1];
By the way, why are your entities named in plural form? It's confusing. If you want to have table names in plural, you may use @Table to specify the table name for the entity explicitly, so it doesn't interfere with reserved words:
@Entity @Table(name = "Users")
public class User implements Serializable { ... }
Here is a DB2 stored procedure that receives a parameter:
CREATE PROCEDURE getStateByName (IN StateName VARCHAR(128))
DYNAMIC RESULT SETS 1
P1: BEGIN
-- Declare cursor
DECLARE State_Cursor CURSOR WITH RETURN for
-- #######################################################################
-- # Replace the SQL statement with your statement.
-- # Note: Be sure to end statements with the terminator character (usually ';')
-- #
-- # The example SQL statement SELECT NAME FROM SYSIBM.SYSTABLES
-- # returns all names from SYSIBM.SYSTABLES.
-- ######################################################################
SELECT * FROM COUNTRY.STATE
WHERE PROVINCE_NAME LIKE UPPER(stateName);
-- Cursor left open for client application
OPEN State_Cursor;
END P1
(COUNTRY is a DB2 schema.)
Now here is a Java entity bean method:
public List<Province> getStateByName(String stateName) throws Exception {
    EntityManager em = this.em;
    List<Province> provinces = null;
    try {
        Query query = em.createNativeQuery("call NGB.getStateByName(?1)", Province.class);
        query.setParameter(1, stateName);
        provinces = (List<Province>) query.getResultList();
    } catch (Exception ex) {
        throw ex;
    }
    return provinces;
}
For any method in a Spring CrudRepository you should be able to specify the @Query yourself. Something like this should work:
@Query( "select o from MyObject o where inventoryId in :ids" )
List<MyObject> findByInventoryIds(@Param("ids") List<Long> inventoryIdList);
@Entity
class Employee {
@OneToOne(orphanRemoval=true)
private Address address;
}
See here.
You can also add
<Resource
auth="Container"
driverClassName="org.apache.derby.jdbc.EmbeddedDriver"
maxActive="20"
maxIdle="10"
maxWait="-1"
name="ds/flexeraDS"
type="javax.sql.DataSource"
url="jdbc:derby:flexeraDB;create=true"
/>
under the META-INF/context.xml file (this will apply at the application level only).
The EntityManager.flush() operation can be used to write all changes to the database before the transaction is committed. By default JPA does not normally write changes to the database until the transaction is committed. This is normally desirable, as it avoids database access, resources and locks until required. It also allows database writes to be ordered and batched for optimal database access, and to maintain integrity constraints and avoid deadlocks. This means that when you call persist, merge, or remove, the corresponding database DML (INSERT, UPDATE, DELETE) is not executed until commit, or until a flush is triggered.
To complete the above answers: I had a case using an XML mapping file where neither @Transient nor transient worked...
I had to put the transient information in the XML file:
<attributes>
(...)
<transient name="field" />
</attributes>
I was able to create an EntityManager with Hibernate and PostgreSQL purely using Java code (with a Spring configuration) as follows:
@Bean
public DataSource dataSource() {
final PGSimpleDataSource dataSource = new PGSimpleDataSource();
dataSource.setDatabaseName( "mytestdb" );
dataSource.setUser( "myuser" );
dataSource.setPassword("mypass");
return dataSource;
}
@Bean
public Properties hibernateProperties(){
final Properties properties = new Properties();
properties.put( "hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect" );
properties.put( "hibernate.connection.driver_class", "org.postgresql.Driver" );
properties.put( "hibernate.hbm2ddl.auto", "create-drop" );
return properties;
}
@Bean
public EntityManagerFactory entityManagerFactory( DataSource dataSource, Properties hibernateProperties ){
final LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
em.setDataSource( dataSource );
em.setPackagesToScan( "net.initech.domain" );
em.setJpaVendorAdapter( new HibernateJpaVendorAdapter() );
em.setJpaProperties( hibernateProperties );
em.setPersistenceUnitName( "mytestdomain" );
em.setPersistenceProviderClass(HibernatePersistenceProvider.class);
em.afterPropertiesSet();
return em.getObject();
}
The call to LocalContainerEntityManagerFactoryBean.afterPropertiesSet() is essential, since otherwise the factory never gets built, and then getObject() returns null and you are chasing NullPointerExceptions all day long. >:-(
It then worked with the following code:
PageEntry pe = new PageEntry();
pe.setLinkName( "Google" );
pe.setLinkDestination( new URL( "http://www.google.com" ) );
EntityTransaction entTrans = entityManager.getTransaction();
entTrans.begin();
entityManager.persist( pe );
entTrans.commit();
Where my entity was this:
@Entity
@Table(name = "page_entries")
public class PageEntry {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private long id;
private String linkName;
private URL linkDestination;
// gets & setters omitted
}
It is there to specify another column as the default id column of the other table, e.g. consider the following
TableA
id int identity
tableb_key varchar
TableB
id int identity
key varchar unique
// in class for TableA
@JoinColumn(name="tableb_key", referencedColumnName="key")
After digging into the Hibernate source code: the configuration below goes to the Oracle DB for the next sequence value only after 50 inserts, so make your INST_PK_SEQ sequence increment by 50 each time it is called.
Hibernate 5 is used for the strategy below.
See also: http://docs.jboss.org/hibernate/orm/5.1/userguide/html_single/Hibernate_User_Guide.html#identifiers-generators-sequence
@Id
@Column(name = "ID")
@GenericGenerator(name = "INST_PK_SEQ",
strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
parameters = {
@org.hibernate.annotations.Parameter(
name = "optimizer", value = "pooled-lo"),
@org.hibernate.annotations.Parameter(
name = "initial_value", value = "1"),
@org.hibernate.annotations.Parameter(
name = "increment_size", value = "50"),
@org.hibernate.annotations.Parameter(
name = SequenceStyleGenerator.SEQUENCE_PARAM, value = "INST_PK_SEQ"),
}
)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "INST_PK_SEQ")
private Long id;
Java's transient keyword is used to denote that a field is not to be serialized, whereas JPA's @Transient annotation is used to indicate that a field is not to be persisted in the database, i.e. their semantics are different.
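A quick sketch contrasting the two (field names are illustrative; note that JPA also skips fields marked with the Java transient keyword):
@Entity
public class Account {
    @Id
    private Long id;

    @Transient
    private int sessionScore;         // persisted: no; serialized: yes

    private transient String scratch; // serialized: no; persisted: no either
}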
query.setParameterList("name", new String[] { "Ron", "Som", "Roxi"}); fixed my issue
JDBC is the predecessor of JPA.
JDBC is a bridge between the Java world and the database world. With JDBC you have to deal with all the low-level details needed for CRUD operations, such as table names and column names, while with JPA (which uses JDBC underneath) you specify those database metadata details through Java annotations.
So JPA creates update queries for you and manages the entities that you looked up or created/updated (it does more as well).
If you want to do JPA without a Java EE container, then Spring and its libraries may be used with the very same Java annotations.
CriteriaBuilder criteriaBuilder = em.getCriteriaBuilder();
CriteriaQuery<Employee> criteriaQuery = criteriaBuilder.createQuery(Employee.class);
Root<Employee> employeeRoot = criteriaQuery.from(Employee.class);
Subquery<Project> projectSubquery = criteriaQuery.subquery(Project.class);
Root<Project> projectRoot = projectSubquery.from(Project.class);
projectSubquery.select(projectRoot);
Expression<String> stringExpression = employeeRoot.get(Employee_.ID);
Predicate predicateIn = stringExpression.in(projectSubquery);
criteriaQuery.select(criteriaBuilder.count(employeeRoot)).where(predicateIn);
This worked for me :
import com.fasterxml.jackson.annotation.JsonFormat;
import org.springframework.format.annotation.DateTimeFormat;
import org.springframework.format.annotation.DateTimeFormat.ISO;
@Column(name="end_date", nullable = false)
@DateTimeFormat(iso = ISO.DATE_TIME)
@JsonFormat(pattern = "yyyy-MM-dd HH:mm")
private LocalDateTime endDate;
For the record, the spring.jpa.hibernate.ddl-auto property is Spring Data JPA specific, and it is their way to specify a value that will eventually be passed to Hibernate under the property it knows, hibernate.hbm2ddl.auto.
The values create, create-drop, validate, and update basically influence how the schema management tool will manipulate the database schema at startup.
For example, the update operation will query the JDBC driver's API to get the database metadata, and then Hibernate compares the object model it creates based on reading your annotated classes or HBM XML mappings and attempts to adjust the schema on the fly. The update operation will attempt to add new columns, constraints, etc., but will never remove a column or constraint that may have existed previously but no longer does as part of the object model from a prior run.
Typically in test case scenarios, you'll likely use create-drop so that you create your schema, your test case adds some mock data, you run your tests, and then during the test case cleanup, the schema objects are dropped, leaving an empty database.
In development, it's common to see developers use update to automatically modify the schema with new additions upon restart. But again, understand that this does not remove a column or constraint that exists from previous executions but is no longer necessary.
In production, it's often highly recommended that you use none, or simply don't specify this property. That is because it's common practice for DBAs to review migration scripts for database changes, particularly if your database is shared across multiple services and applications.
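In Spring Boot terms, the production recommendation above would typically look like this in application.properties:
spring.jpa.hibernate.ddl-auto=none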
I'm not sure this solution is within the spec, but I think I can share it for others.
My application consists of three modules:
- my-entities: contains entity classes only; no META-INF/persistence.xml.
- my-services: depends on my-entities; contains EJBs only.
- my-resources: depends on my-services; contains resource classes and META-INF/persistence.xml.
The questions were: how can the <jar-file/> element in my-resources point to the version-postfixed artifact name of a transitive dependency, and how can the <jar-file/> element's value be kept in sync with the actual transitive dependency's version?
I put a property and a dependency in my-resources/pom.xml.
<properties>
<my-entities.version>x.y.z-SNAPSHOT</my-entities.version>
</properties>
<dependencies>
<dependency>
<!-- this is actually a transitive dependency -->
<groupId>...</groupId>
<artifactId>my-entities</artifactId>
<version>${my-entities.version}</version>
<scope>compile</scope> <!-- other values won't work -->
</dependency>
<dependency>
<groupId>...</groupId>
<artifactId>my-services</artifactId>
<version>some.very.sepecific</version>
<scope>compile</scope>
</dependency>
</dependencies>
Now get the persistence.xml ready for being filtered:
<?xml version="1.0" encoding="UTF-8"?>
<persistence ...>
<persistence-unit name="myPU" transaction-type="JTA">
...
<jar-file>lib/my-entities-${my-entities.version}.jar</jar-file>
...
</persistence-unit>
</persistence>
With the dependencyConvergence rule, we can ensure that the my-entities version is the same for both the direct and the transitive dependency.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
<version>1.4.1</version>
<executions>
<execution>
<id>enforce</id>
<configuration>
<rules>
<dependencyConvergence/>
</rules>
</configuration>
<goals>
<goal>enforce</goal>
</goals>
</execution>
</executions>
</plugin>
No, you don't need to write a criteria query; that would be boilerplate code. If you are working in Spring Boot, just do this simple thing: in your repo, declare a method named findBy[exact field name, capitalized]. For example, if your model or document contains a string field myField and you want to find by it, then your method will be:
findByMyField(String myField);
public List<Model> getAllData(Pageable pageable){
List<Model> models= new ArrayList<>();
modelRepository.findAllByOrderByIdDesc(pageable).forEach(models::add);
return models;
}
To make boolean mapping to Y/N even better, add the following to your Hibernate configuration:
<!-- when using type="yes_no" for booleans, the line below allow booleans in HQL expressions: -->
<property name="hibernate.query.substitutions">true 'Y', false 'N'</property>
Now you can use booleans in HQL, for example:
"FROM " + SomeDomainClass.class.getName() + " somedomainclass " +
"WHERE somedomainclass.someboolean = false"
You need to pass the parameters to the stored procedure.
It should work like this:
List result = em
.createNativeQuery("call getEmployeeDetails(:employeeId,:companyId)")
.setParameter("emplyoyeeId", 123L)
.setParameter("companyId", 456L)
.getResultList();
Update:
Or maybe it shouldn't.
In the book EJB3 in Action, it says on page 383 that JPA does not support stored procedures. (The page is only a preview; you don't get the full text. The entire book is available as a download in several places, including this one; I don't know if that is legal, though.)
Anyway, the text is this:
If you’re a big fan of SQL, you may be willing to exploit the power of database stored procedures. Unfortunately, JPA doesn’t support stored procedures, and you have to depend on a proprietary feature of your persistence provider. However, you can use simple stored functions (without out parameters) with a native SQL query.
Make sure that you have entered valid details in application.properties and that your database server is available. As an example, when you are connecting with MySQL, check whether XAMPP is running properly.
You are experiencing this issue for two reasons.
When performing a join in JPQL you must ensure that an underlying association between the entities attempting to be joined exists. In your example, you are missing an association between the User and Area entities. In order to create this association we must add an Area field within the User class and establish the appropriate JPA Mapping. I have attached the source for User below. (Please note I moved the mappings to the fields)
User.java
@Entity
@Table(name="user")
public class User {
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
@Column(name="iduser")
private Long idUser;
@Column(name="user_name")
private String userName;
@OneToOne()
@JoinColumn(name="idarea")
private Area area;
public Long getIdUser() {
return idUser;
}
public void setIdUser(Long idUser) {
this.idUser = idUser;
}
public String getUserName() {
return userName;
}
public void setUserName(String userName) {
this.userName = userName;
}
public Area getArea() {
return area;
}
public void setArea(Area area) {
this.area = area;
}
}
Once this relationship is established you can reference the area object in your @Query declaration. The query specified in your @Query annotation must follow proper syntax, which means you should omit the on clause. See the following:
@Query("select u.userName from User u inner join u.area ar where ar.idArea = :idArea")
While looking over your question I also made the relationship between the User and Area entities bidirectional. Here is the source for the Area entity to establish the bidirectional relationship.
Area.java
@Entity
@Table(name = "area")
public class Area {
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
@Column(name="idarea")
private Long idArea;
@Column(name="area_name")
private String areaName;
@OneToOne(fetch=FetchType.LAZY, mappedBy="area")
private User user;
public Long getIdArea() {
return idArea;
}
public void setIdArea(Long idArea) {
this.idArea = idArea;
}
public String getAreaName() {
return areaName;
}
public void setAreaName(String areaName) {
this.areaName = areaName;
}
public User getUser() {
return user;
}
public void setUser(User user) {
this.user = user;
}
}
One of the JPA ways of getting only particular columns is to ask for a Tuple object.
In your case you would need to write something like this:
CriteriaQuery<Tuple> cq = builder.createTupleQuery();
// write the Root, Path elements as usual
Root<EntityClazz> root = cq.from(EntityClazz.class);
cq.multiselect(root.get(EntityClazz_.ID), root.get(EntityClazz_.VERSION)); //using metamodel
List<Tuple> tupleResult = em.createQuery(cq).getResultList();
for (Tuple t : tupleResult) {
Long id = (Long) t.get(0);
Long version = (Long) t.get(1);
}
Another approach is possible if you have a class representing the result, like T in your case. T doesn't need to be an entity class. If T has a constructor like:
public T(Long id, Long version)
then you can use T directly in your CriteriaQuery constructor:
CriteriaQuery<T> cq = builder.createQuery(T.class);
// write the Root, Path elements as usual
Root<EntityClazz> root = cq.from(EntityClazz.class);
cq.multiselect(root.get(EntityClazz_.ID), root.get(EntityClazz_.VERSION)); //using metamodel
List<T> result = em.createQuery(cq).getResultList();
See this link for further reference.
Another workaround is to call
Hibernate.initialize(extractedObject.getSubobjectToUnproxy());
just before closing the session.
I think replacing @ComponentScan with @ComponentScan("com.nervy.dialer.domain") will work.
Edit:
I have added a sample application to demonstrate how to set up a pooled datasource connection with BoneCP. The application has the same structure as yours. I hope this will help you to resolve your configuration problems.
In my experience, Spring doesn't handle all the cases with ease. In your case the following should do the trick:
Page<QueuedBook> findByBookIdRegion(Region region, Pageable pageable);
or
Page<QueuedBook> findByBookId_Region(Region region, Pageable pageable);
However, it also depends on the naming convention of the fields in your @Embeddable class; e.g. the following field might not work in any of the styles mentioned above:
private String cRcdDel;
I tried both of the following cases and they didn't work; it seems Spring doesn't handle this type of naming convention (i.e. too many capitals, especially as the second letter; I'm not sure if this is the only problematic case):
Page<QueuedBook> findByBookIdCRcdDel(String cRcdDel, Pageable pageable);
or
Page<QueuedBook> findByBookId_CRcdDel(String cRcdDel, Pageable pageable);
When I renamed column to
private String rcdDel;
my following solutions work fine without any issue:
Page<QueuedBook> findByBookIdRcdDel(String rcdDel, Pageable pageable);
OR
Page<QueuedBook> findByBookId_RcdDel(String rcdDel, Pageable pageable);
I usually use the getReference method when I do not need to access database state (I mean getter methods), just to change state (I mean setter methods). As you should know, getReference returns a proxy object which uses a powerful feature called automatic dirty checking. Suppose the following:
public class Person {
private String name;
private Integer age;
}
public class PersonServiceImpl implements PersonService {
public void changeAge(Integer personId, Integer newAge) {
Person person = em.getReference(Person.class, personId);
// person is a proxy
person.setAge(newAge);
}
}
If I call the find method, the JPA provider, behind the scenes, will call:
SELECT NAME, AGE FROM PERSON WHERE PERSON_ID = ?
UPDATE PERSON SET AGE = ? WHERE PERSON_ID = ?
If I call the getReference method, the JPA provider, behind the scenes, will call:
UPDATE PERSON SET AGE = ? WHERE PERSON_ID = ?
And you know why?
When you call getReference, you will get a proxy object. Something like this one (the JPA provider takes care of implementing this proxy):
public class PersonProxy {
// JPA provider sets up this field when you call getReference
private Integer personId;
private String query = "UPDATE PERSON SET ";
private boolean stateChanged = false;
public void setAge(Integer newAge) {
stateChanged = true;
query += "AGE = " + newAge;
}
}
So before the transaction commits, the JPA provider will check the stateChanged flag in order to update OR NOT the person entity. If no rows are updated after the update statement, the JPA provider will throw an EntityNotFoundException according to the JPA specification.
Regards,
If you use EclipseLink: you should be in a JPA transaction to access the Connection.
entityManager.getTransaction().begin();
java.sql.Connection connection = entityManager.unwrap(java.sql.Connection.class);
...
entityManager.getTransaction().commit();
Just add the @Temporal(TemporalType.DATE) annotation to the java.util.Date field in your entity class.
More information available in this stackoverflow answer.
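For example (a sketch; the field and column names are illustrative):
@Temporal(TemporalType.DATE)
@Column(name = "birth_date")
private java.util.Date birthDate; // stored as a DATE, time component dropped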
Turns out, when the table name is different than the model name, you have to change the annotations to:
@Entity
@Table(name = "table_name")
class WhateverNameYouWant {
...
Instead of simply using the @Entity annotation.
What was weird for me is that the class it was trying to convert to didn't exist. This worked for me.
I had exactly this problem, with the exception that the Person class had an embedded key class. My own solution was to join them in the query AND remove
@Fetch(FetchMode.JOIN)
My embedded id class:
@Embeddable
public class MessageRecipientId implements Serializable {
@ManyToOne(targetEntity = Message.class, fetch = FetchType.LAZY)
@JoinColumn(name="messageId")
private Message message;
private String governmentId;
public MessageRecipientId() {
}
public Message getMessage() {
return message;
}
public void setMessage(Message message) {
this.message = message;
}
public String getGovernmentId() {
return governmentId;
}
public void setGovernmentId(String governmentId) {
this.governmentId = governmentId;
}
public MessageRecipientId(Message message, GovernmentId governmentId) {
this.message = message;
this.governmentId = governmentId.getValue();
}
}
I am not sure if this will help your situation (that is, if it still exists), but here is what worked for me after scouring the web for a similar issue.
I was creating a native query from a persistence EntityManager to perform an update:
Query query = entityManager.createNativeQuery(queryString);
I was receiving the following error:
caused by: javax.persistence.TransactionRequiredException: Executing an update/delete query
Many solutions suggest adding @Transactional to your method. Just doing this did not change the error.
Some solutions suggest asking the EntityManager for an EntityTransaction so that you can call begin and commit yourself. This throws another error:
caused by: java.lang.IllegalStateException: Not allowed to create transaction on shared EntityManager - use Spring transactions or EJB CMT instead
I then tried a method which most sites say is for application-managed entity managers, not container-managed ones (which I believe Spring's is): joinTransaction().
With @Transactional decorating the method, and calling joinTransaction() on the EntityManager object just prior to calling query.executeUpdate(), my native query update worked.
I hope this helps someone else experiencing this issue.
Here's a typed/generics version, based on Rodrigo IronMan's implementation:
public static <T> T getSingleResultOrNull(TypedQuery<T> query) {
query.setMaxResults(1);
List<T> list = query.getResultList();
if (list.isEmpty()) {
return null;
}
return list.get(0);
}
The primary key class must define equals and hashCode methods
Take a look at http://start.spring.io/ it basically gives you a kick starter with either maven or gradle build.
Note: This is a Spring Boot based archetype.
This should also work using the JPA 2.0 @MapsId annotation instead of Hibernate's GenericGenerator:
@Entity
public class Person {
@Id
@GeneratedValue
public int id;
@OneToOne
@PrimaryKeyJoinColumn
public OtherInfo otherInfo;
// rest of attributes ...
}
@Entity
public class OtherInfo {
@Id
public int id;
@MapsId
@OneToOne
@JoinColumn(name="id")
public Person person;
// rest of attributes ...
}
More details on this in Hibernate 4.1 documentation under section 5.1.2.2.7.
Relationships in JPA are always unidirectional, unless you associate the parent with the child in both directions. Cascading REMOVE operations from the parent to the child requires a relation from the parent to the child (not just the opposite).
You'll therefore need to do one of the following:
- Change the unidirectional @ManyToOne relationship to a bidirectional @ManyToOne, or to a unidirectional @OneToMany. You can then cascade REMOVE operations so that EntityManager.remove will remove the parent and the children. You can also specify orphanRemoval as true, to delete any orphaned children when the child entity in the parent collection is set to null, i.e. remove the child when it is not present in any parent's collection.
- Or, specify the foreign key in the database schema with ON DELETE CASCADE. You'll need to invoke EntityManager.clear() after calling EntityManager.remove(parent), as the persistence context needs to be refreshed; the child entities are not supposed to exist in the persistence context after they've been deleted in the database.
as the persistence context needs to be refreshed - the child entities are not supposed to exist in the persistence context after they've been deleted in the database.An other example would be on the "created_on" column where you want to let the database handle the date creation
A few observations:
- You could make the Group entity the owning side, as currently the User is the owner.
- The groups and users collections are not kept in sync with each other. I mean, after deleting the User1 instance from Group1.users, the User1.groups collection is not changed automatically (which is quite surprising for me).
- If User stays the owner, then when deleting a user the user-group relation will be updated automatically. But when deleting a group you have to take care of deleting the relation yourself, like this:
entityManager.remove(group);
for (User user : group.users) {
    user.groups.remove(group);
}
...
// then merge() and flush()
I know that JPA entities must have a primary key but I can't change the database structure due to reasons beyond my control.
More precisely, a JPA entity must have some Id defined. But a JPA Id does not necessarily have to be mapped to the table's primary key (and JPA can somehow deal with a table without a primary key or unique constraint).
Is it possible to create JPA (Hibernate) entities that will work with a database structure like this?
If you have a column or a set of columns in the table that makes a unique value, you can use this unique set of columns as your Id in JPA.
If your table has no unique columns at all, you can use all of the columns as the Id.
And if your table has some id but your entity doesn't, make it an Embeddable.
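A sketch of the "unique set of columns as the Id" approach, with a hypothetical entity:
@Entity
public class LogEntry {
    @EmbeddedId
    private LogEntryId id;
}

@Embeddable
public class LogEntryId implements Serializable {
    private String host;
    private java.util.Date loggedAt;

    // an id class must define equals and hashCode
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof LogEntryId)) return false;
        LogEntryId other = (LogEntryId) o;
        return java.util.Objects.equals(host, other.host)
                && java.util.Objects.equals(loggedAt, other.loggedAt);
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hash(host, loggedAt);
    }
}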
I know this is an old question and it has already been answered, but here's another approach:
@Query("select new map(count(v) as cnt, v.answer) from Survey v group by v.answer")
public List<?> findSurveyCount();
Since the SERVICE_USER table is not a pure join table but has additional functional fields (blocked), you must map it as an entity and decompose the many-to-many association between User and Service into two one-to-many associations: one User has many UserServices, and one Service has many UserServices.
You haven't shown us the most important part: the mapping and initialization of the relationships between your entities (i.e. the part you have problems with). So I'll show you how it should look.
If you make the relationships bidirectional, you should thus have
class User {
@OneToMany(mappedBy = "user")
private Set<UserService> userServices = new HashSet<UserService>();
}
class UserService {
@ManyToOne
@JoinColumn(name = "user_id")
private User user;
@ManyToOne
@JoinColumn(name = "service_code")
private Service service;
@Column(name = "blocked")
private boolean blocked;
}
class Service {
@OneToMany(mappedBy = "service")
private Set<UserService> userServices = new HashSet<UserService>();
}
If you don't put any cascade on your relationships, then you must persist/save all the entities. Although only the owning side of the relationship (here, the UserService side) must be initialized, it's also good practice to make sure both sides are coherent.
User user = new User();
Service service = new Service();
UserService userService = new UserService();
user.addUserService(userService);
userService.setUser(user);
service.addUserService(userService);
userService.setService(service);
session.save(user);
session.save(service);
session.save(userService);
The issue is that you are not able to get a connection to MYSQL database and hence it is throwing an error saying that cannot build a session factory.
Please see the error below:
Caused by: java.sql.SQLException: Access denied for user ''@'localhost' (using password: NO)
which points to the username not getting populated.
Please recheck the system properties:
dataSource.setUsername(System.getProperty("root"));
Some packages seem to be missing as well, pointing to a dependency issue:
package org.gjt.mm.mysql does not exist
Please run the mvn dependency:tree command to check for dependencies.
This occurred for me when persisting an entity in which the existing record in the database had a NULL value for the field annotated with @Version (for optimistic locking). Updating the NULL value to 0 in the database corrected this.
Using @EmbeddedId for the PK entity solved my issue.
@Entity
@Table(name="SAMPLE")
public class SampleEntity implements Serializable{
private static final long serialVersionUID = 1L;
@EmbeddedId
SampleEntityPK id;
}
Here is a solution for storing a Set using @Converter and StringTokenizer, with a few more checks than @jonck-van-der-kogel's solution.
In your Entity class:
@Convert(converter = StringSetConverter.class)
@Column
private Set<String> washSaleTickers;
StringSetConverter:
package com.model.domain.converters;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import java.util.HashSet;
import java.util.Set;
import java.util.StringTokenizer;
@Converter
public class StringSetConverter implements AttributeConverter<Set<String>, String> {
private final String GROUP_DELIMITER = "=IWILLNEVERHAPPEN=";
@Override
public String convertToDatabaseColumn(Set<String> stringList) {
if (stringList == null) {
return new String();
}
return String.join(GROUP_DELIMITER, stringList);
}
@Override
public Set<String> convertToEntityAttribute(String string) {
Set<String> resultingSet = new HashSet<>();
StringTokenizer st = new StringTokenizer(string, GROUP_DELIMITER);
while (st.hasMoreTokens())
resultingSet.add(st.nextToken());
return resultingSet;
}
}
@Entity(name = "someThing")
=> this name will be used to identify the domain ..this name will only be identified by hql
queries ..ie ..name of the domain object
@Table(name = "someThing")
=> this name will be used to which table referred by domain object..ie ..name of the table
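A sketch putting both together (names are illustrative):
@Entity(name = "SomeThing")          // JPQL/HQL name
@Table(name = "some_thing_table")    // database table name
public class SomeThingEntity { ... }

// queries must use the entity name, not the table name:
em.createQuery("SELECT s FROM SomeThing s").getResultList();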
You can just return a Boolean like this:
import javax.persistence.QueryHint;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.jpa.repository.QueryHints;
import org.springframework.data.repository.query.Param;
@QueryHints(@QueryHint(name = org.hibernate.jpa.QueryHints.HINT_FETCH_SIZE, value = "1"))
@Query(value = "SELECT (1=1) FROM MyEntity WHERE ...... :id ....")
Boolean existsIfBlaBla(@Param("id") String id);
Boolean.TRUE.equals(existsIfBlaBla("0815"))
could be a solution
For my case it was due to IntelliJ IDEA setting Java 11 as the default project SDK, while the project was implemented in Java 8. I changed "Project SDK" in File -> Project Structure -> Project (under Project Settings).
When using IN with a collection-valued parameter you don't need (...):
@NamedQuery(name = "EventLog.viewDatesInclude",
query = "SELECT el FROM EventLog el WHERE el.timeMark >= :dateFrom AND "
+ "el.timeMark <= :dateTo AND "
+ "el.name IN :inclList")
The MyKey class (@Embeddable) should not have any relationships like @ManyToOne
See here for an example from the OpenJPA docs. CascadeType.ALL means it will do all actions.
Quote:
CascadeType.PERSIST: When persisting an entity, also persist the entities held in its fields. We suggest a liberal application of this cascade rule, because if the EntityManager finds a field that references a new entity during the flush, and the field does not use CascadeType.PERSIST, it is an error.
CascadeType.REMOVE: When deleting an entity, it also deletes the entities held in this field.
CascadeType.REFRESH: When refreshing an entity, also refresh the entities held in this field.
CascadeType.MERGE: When merging entity state, also merge the entities held in this field.
Sebastian
I have seen this error; for me the issue was that there was a space in the absolute path of the persistence.xml. Removing it helped me.
For me, the problem was having nested EAGER fetches.
One solution is to set the nested fields to LAZY and use Hibernate.initialize() to load the nested field(s):
x = session.get(ClassName.class, id);
Hibernate.initialize(x.getNestedField());
Please take a look at jpa-metamodels-with-maven-example.
For Hibernate, we need org.hibernate.orm:hibernate-jpamodelgen; the processor class is org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor.
<dependency>
<groupId>org.hibernate.orm</groupId>
<artifactId>hibernate-jpamodelgen</artifactId>
<version>${version.hibernate-jpamodelgen}</version>
<scope>provided</scope>
</dependency>
<plugin>
<groupId>org.bsc.maven</groupId>
<artifactId>maven-processor-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>process</goal>
</goals>
<phase>generate-sources</phase>
<configuration>
<compilerArguments>-AaddGeneratedAnnotation=false</compilerArguments> <!-- suppress java.annotation -->
<processors>
<processor>org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor</processor>
</processors>
</configuration>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.hibernate.orm</groupId>
<artifactId>hibernate-jpamodelgen</artifactId>
<version>${version.hibernate-jpamodelgen}</version>
</dependency>
</dependencies>
</plugin>
For OpenJPA, we need org.apache.openjpa:openjpa; the processor class is org.apache.openjpa.persistence.meta.AnnotationProcessor6. OpenJPA seems to require an additional element: <openjpa.metamodel>true</openjpa.metamodel>.
<dependencies>
<dependency>
<groupId>org.apache.openjpa</groupId>
<artifactId>openjpa</artifactId>
<scope>provided</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<compilerArgs>
<arg>-Aopenjpa.metamodel=true</arg>
</compilerArgs>
</configuration>
</plugin>
</plugins>
</build>
<plugin>
<groupId>org.bsc.maven</groupId>
<artifactId>maven-processor-plugin</artifactId>
<executions>
<execution>
<id>process</id>
<goals>
<goal>process</goal>
</goals>
<phase>generate-sources</phase>
<configuration>
<processors>
<processor>org.apache.openjpa.persistence.meta.AnnotationProcessor6</processor>
</processors>
<optionMap>
<openjpa.metamodel>true</openjpa.metamodel>
</optionMap>
</configuration>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.apache.openjpa</groupId>
<artifactId>openjpa</artifactId>
<version>${version.openjpa}</version>
</dependency>
</dependencies>
</plugin>
For EclipseLink, we need org.eclipse.persistence:org.eclipse.persistence.jpa.modelgen.processor; the processor class is org.eclipse.persistence.internal.jpa.modelgen.CanonicalModelProcessor. EclipseLink requires a persistence.xml.
<dependencies>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.jpa.modelgen.processor</artifactId>
<scope>provided</scope>
</dependency>
</dependencies>
<plugin>
<groupId>org.bsc.maven</groupId>
<artifactId>maven-processor-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>process</goal>
</goals>
<phase>generate-sources</phase>
<configuration>
<processors>
<processor>org.eclipse.persistence.internal.jpa.modelgen.CanonicalModelProcessor</processor>
</processors>
<compilerArguments>-Aeclipselink.persistencexml=src/main/resources-${environment.id}/META-INF/persistence.xml</compilerArguments>
</configuration>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.jpa.modelgen.processor</artifactId>
<version>${version.eclipselink}</version>
</dependency>
</dependencies>
</plugin>
For DataNucleus, we need org.datanucleus:datanucleus-jpa-query; the processor class is org.datanucleus.jpa.query.JPACriteriaProcessor.
<dependencies>
<dependency>
<groupId>org.datanucleus</groupId>
<artifactId>datanucleus-jpa-query</artifactId>
<scope>provided</scope>
</dependency>
</dependencies>
<plugin>
<groupId>org.bsc.maven</groupId>
<artifactId>maven-processor-plugin</artifactId>
<executions>
<execution>
<id>process</id>
<goals>
<goal>process</goal>
</goals>
<phase>generate-sources</phase>
<configuration>
<processors>
<processor>org.datanucleus.jpa.query.JPACriteriaProcessor</processor>
</processors>
</configuration>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.datanucleus</groupId>
<artifactId>datanucleus-jpa-query</artifactId>
<version>${version.datanucleus}</version>
</dependency>
</dependencies>
</plugin>
It's also cleaner to use @JoinTable when an entity could be the child in several parent/child relationships with different types of parents. To follow up on Behrang's example, imagine a Task can be the child of Project, Person, Department, Study, and Process.
Should the task table have 5 nullable foreign key fields? I think not...
The moment you remove a child entity from the collection, you will also be removing that child entity from the DB. orphanRemoval also implies that you cannot change parents: if a department has employees, once you remove an employee to put it in another department, you will have inadvertently removed that employee from the DB at flush/commit (whichever comes first). The moral is to set orphanRemoval to true only as long as you are certain that the children of that parent will not migrate to a different parent during their existence. Turning on orphanRemoval also automatically adds REMOVE to the cascade list.
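A sketch of the mapping described above (hypothetical Department/Employee entities):
@Entity
public class Department {
    @Id
    @GeneratedValue
    private Long id;

    // removing an Employee from this list deletes its row at flush/commit
    @OneToMany(mappedBy = "department", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Employee> employees = new ArrayList<>();
}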
Please see the reference Spring Data JPA @Query if you are using Spring Data JPA version 2.0.4 or later. A sample is shown below:
@Query(value = "SELECT u FROM User u ORDER BY id")
Page<User> findAllUsersWithPagination(Pageable pageable);
I had the same problem and a simple solution that I found was:
List<Object[]> results = query.getResultList();
for (Object[] result: results) {
SomeClass something = (SomeClass)result[1];
something.doSomething();
}
I know this is definitely not the most elegant solution, nor is it best practice, but it works, at least for me.
Instead of letting the business layer decide how it’s best to fetch all the associations that are needed by the View layer, OSIV (Open Session in View) forces the Persistence Context to stay open so that the View layer can trigger the Proxy initialization, as illustrated by the following diagram.
1. The OpenSessionInViewFilter calls the openSession method of the underlying SessionFactory and obtains a new Session.
2. The Session is bound to the TransactionSynchronizationManager.
3. The OpenSessionInViewFilter calls the doFilter of the javax.servlet.FilterChain object reference and the request is further processed.
4. The DispatcherServlet is called, and it routes the HTTP request to the underlying PostController.
5. The PostController calls the PostService to get a list of Post entities.
6. The PostService opens a new transaction, and the HibernateTransactionManager reuses the same Session that was opened by the OpenSessionInViewFilter.
7. The PostDAO fetches the list of Post entities without initializing any lazy association.
8. The PostService commits the underlying transaction, but the Session is not closed because it was opened externally.
9. The DispatcherServlet starts rendering the UI, which, in turn, navigates the lazy associations and triggers their initialization.
10. The OpenSessionInViewFilter can close the Session, and the underlying database connection is released as well.
At first glance, this might not look like a terrible thing to do, but, once you view it from a database perspective, a series of flaws start to become more obvious.
The service layer opens and closes a database transaction, but afterward, there is no explicit transaction going on. For this reason, every additional statement issued from the UI rendering phase is executed in auto-commit mode. Auto-commit puts pressure on the database server because each transaction issues a commit at the end, which can trigger a transaction log flush to disk. One optimization would be to mark the Connection as read-only, which would allow the database server to avoid writing to the transaction log.
There is no separation of concerns anymore because statements are generated both by the service layer and by the UI rendering process. Writing integration tests that assert the number of statements being generated requires going through all layers (web, service, DAO) while having the application deployed on a web container. Even when using an in-memory database (e.g. HSQLDB) and a lightweight webserver (e.g. Jetty), these integration tests are going to be slower to execute than if layers were separated and the back-end integration tests used the database, while the front-end integration tests were mocking the service layer altogether.
The UI layer is limited to navigating associations, which can, in turn, trigger N+1 query problems. Although Hibernate offers @BatchSize for fetching associations in batches, and FetchMode.SUBSELECT to cope with this scenario, the annotations affect the default fetch plan, so they get applied to every business use case. For this reason, a data access layer query is much more suitable because it can be tailored to the current use case's data fetch requirements.
Last but not least, the database connection is held throughout the UI rendering phase which increases connection lease time and limits the overall transaction throughput due to congestion on the database connection pool. The more the connection is held, the more other concurrent requests are going to wait to get a connection from the pool.
Unfortunately, OSIV (Open Session in View) is enabled by default in Spring Boot, and OSIV is really a bad idea from a performance and scalability perspective.
So, make sure that in the application.properties configuration file, you have the following entry:
spring.jpa.open-in-view=false
This will disable OSIV so that you can handle the LazyInitializationException the right way.
Starting with version 2.0, Spring Boot issues a warning when OSIV is enabled by default, so you can discover this problem long before it affects a production system.
Placing @Transient on the getter of a private field worked for me.
private String name;
@Transient
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
Yes, although it's full of gotchas, since JPA is a bit peculiar. It's very much worth reading the documentation on injecting a JPA EntityManager and EntityManagerFactory without explicit Spring dependencies in your code:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/orm.html#orm-jpa
This allows you to either inject the EntityManagerFactory, or else inject a thread-safe, transactional proxy of an EntityManager directly. The latter makes for simpler code, but means more Spring plumbing is required.
Use count(d.ertek) or count(d.id) instead of count(d). This can happen when you have a composite primary key on your entity.
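For example (a sketch with a hypothetical Document entity):
Long total = em.createQuery(
        "select count(d.id) from Document d", Long.class)
    .getSingleResult();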
I have a MySQL schema with auto-generated values. I use the strategy=GenerationType.IDENTITY tag and it seems to work fine in MySQL; I guess it should work in most DB engines as well.
CREATE TABLE user (
id bigint NOT NULL auto_increment,
name varchar(64) NOT NULL default '',
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
User.java
:
// mark this JavaBean to be JPA scoped class
@Entity
@Table(name="user")
public class User {
@Id @GeneratedValue(strategy=GenerationType.IDENTITY)
private long id; // primary key (autogen surrogate)
@Column(name="name")
private String name;
public long getId() { return id; }
public void setId(long id) { this.id = id; }
public String getName() { return name; }
public void setName(String name) { this.name=name; }
}
I have encountered this issue!
Luckily, I found two ways and came to understand some things, but the rest is still not clear. I hope someone will discuss or help if you know more.
Person person = this.personRepository.findById(0).get();
person.setName("Neo");
this.personRepository.save(person);
In my case, I had previously used the IDE to generate entities from the database schema, and persistence.xml was automatically generated in src/main/java/META-INF. Following https://stackoverflow.com/a/23890419/10701129, I moved it to src/main/resources/META-INF and also marked META-INF as a source root. That works for me.
But simply marking the original META-INF (that is, src/main/java/META-INF) as a source root doesn't work, which confuses me.
This is coming from JPA. In a very simple way:
persist(entity) should be used with totally new entities, to add them to the DB (if the entity already exists in the DB, an EntityExistsException will be thrown).
merge(entity) should be used to put an entity back into the persistence context if the entity was detached and has been changed.
Spring Data JPA by default looks for an EntityManagerFactory bean named entityManagerFactory. Check out this part of the Javadoc of EnableJpaRepositories, or Table 2.1 of the Spring Data JPA documentation.
That means that you either have to rename your emf bean to entityManagerFactory, or change your Spring configuration to:
<jpa:repositories base-package="your.package" entity-manager-factory-ref="emf" />
(if you are using XML)
or
@EnableJpaRepositories(basePackages="your.package", entityManagerFactoryRef="emf")
(if you are using Java Config)
From the Spring manual.
Spring Boot can auto-configure embedded H2, HSQL, and Derby databases. You don't need to provide any connection URLs; simply include a build dependency for the embedded database that you want to use.
For example, typical POM dependencies would be:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<scope>runtime</scope>
</dependency>
For me, leaving out the spring-boot-starter-data-jpa dependency and just using the spring-boot-starter-jdbc dependency worked like a charm, as long as I had h2 (or hsqldb) included as a dependency.
I have recently evaluated and picked a persistence framework for a java project and my findings are as follows:
What I am seeing is that the support in favour of JDO is primarily:
and the support in favour of JPA is primarily:
I am seeing a lot of pro-JPA posts from JPA developers who have clearly not used JDO/Datanucleus offering weak arguments for not using JDO.
I am also seeing a lot of posts from JPA users who have migrated to JDO and are much happier as a result.
In respect of JPA being more popular, it seems that this is due in part to RDBMS vendor support rather than to it being technically superior. (Sounds like VHS/Betamax to me.)
JDO and its reference implementation Datanucleus is clearly not dead, as shown by Google's adoption of it for GAE and active development of the source code (http://sourceforge.net/projects/datanucleus/).
I have seen a number of complaints about JDO due to bytecode enhancement, but no explanation yet for why it is bad.
In fact, in a world that is becoming more and more obsessed by NoSQL solutions, JDO (and the datanucleus implementation) seems a much safer bet.
I have just started using JDO/Datanucleus and have it set up so that I can switch easily between using db4o and mysql. It's helpful for rapid development to use db4o and not have to worry too much about the DB schema and then, once the schema is stabilised to deploy to a database. I also feel confident that later on, I could deploy all/part of my application to GAE or take advantage of distributed storage/map-reduce a la hbase /hadoop / cassandra without too much refactoring.
I found the initial hurdle of getting started with Datanucleus a little tricky - the documentation on the Datanucleus website is a little hard to get into, and the tutorials are not as easy to follow as I would have liked. Having said that, the more detailed documentation on the API and mapping is very good once you get past the initial learning curve.
The answer is, it depends what you want. I would rather have cleaner code, no vendor lock-in, a more pojo-orientated approach, and nosql options versus more popularity.
If you want the warm fuzzy feeling that you are doing the same as the majority of other developers/sheep, choose JPA/Hibernate. If you want to lead in your field, test drive JDO/Datanucleus and make up your own mind.
While spring.jpa.open-in-view was true, I didn't have any problem with getOne, but after setting it to false, I got a LazyInitializationException. The problem was then solved by replacing getOne with findById.
There is another solution that does not replace the getOne method: put @Transactional on the method which is calling repository.getOne(id). This way a transaction will exist and the session will not be closed inside your method, so there will not be any LazyInitializationException while using the entity.
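A sketch of that second option (the service and repository names are assumed):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PersonService {

    private final PersonRepository repository;

    public PersonService(PersonRepository repository) {
        this.repository = repository;
    }

    @Transactional // keeps the persistence session open for the whole method
    public String personName(Long id) {
        Person person = repository.getOne(id); // returns a lazy proxy
        return person.getName(); // the proxy is initialized here, inside the transaction
    }
}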
Are you perhaps using java.sql.Date
? While that has millisecond granularity as a Java class (it is a subclass of java.util.Date
, bad design decision), it will be interpreted by the JDBC driver as a date without a time component. You have to use java.sql.Timestamp
instead.
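A quick way to see the difference (runnable as-is; the printed values are examples):
import java.sql.Date;
import java.sql.Timestamp;

public class SqlDateVsTimestamp {
    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // java.sql.Date prints (and is sent to the driver as) the date only
        System.out.println(new Date(now));      // e.g. 2011-12-21
        // java.sql.Timestamp keeps the time down to fractional seconds
        System.out.println(new Timestamp(now)); // e.g. 2011-12-21 13:28:33.123
    }
}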
Possibly closely related to Pascal's code:
@Entity
@Table(name = "AUTHORITY_")
public class Authority implements Serializable {
public enum Right {
READ(100), WRITE(200), EDITOR(300);
private Integer value;
private Right(Integer value) {
this.value = value;
}
// Reverse lookup: get the Right key from its value
private static final Map<Integer, Right> lookup = new HashMap<Integer, Right>();
static {
for (Right item : Right.values())
lookup.put(item.getValue(), item);
}
public Integer getValue() {
return value;
}
public static Right getKey(Integer value) {
return lookup.get(value);
}
};
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "AUTHORITY_ID")
private Long id;
@Column(name = "RIGHT_ID")
private Integer rightId;
public Right getRight() {
return Right.getKey(this.rightId);
}
public void setRight(Right right) {
this.rightId = right.getValue();
}
}
You can set nativeQuery = true
in the @Query
annotation from a Repository
class like this:
public static final String FIND_PROJECTS = "SELECT projectId, projectName FROM projects";
@Query(value = FIND_PROJECTS, nativeQuery = true)
public List<Object[]> findProjects();
Note that you will have to do the mapping yourself though. It's probably easier to just use the regular mapped lookup like this unless you really only need those two values:
public List<Project> findAll()
It's probably worth looking at the Spring data docs as well.
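If you do go the native route, the manual mapping could look like this (ProjectView is a hypothetical DTO, not part of the original answer):
import java.util.List;
import java.util.stream.Collectors;

public class ProjectMapper {

    public static class ProjectView {
        public final long id;
        public final String name;
        public ProjectView(long id, String name) { this.id = id; this.name = name; }
    }

    public List<ProjectView> toViews(List<Object[]> rows) {
        // each Object[] holds {projectId, projectName} in SELECT-list order
        return rows.stream()
                .map(row -> new ProjectView(((Number) row[0]).longValue(), (String) row[1]))
                .collect(Collectors.toList());
    }
}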
In my case, I modified hibernate-core source code, well, to introduce a new annotation @DefaultValue
:
commit 34199cba96b6b1dc42d0d19c066bd4d119b553d5
Author: Lenik <xjl at 99jsj.com>
Date: Wed Dec 21 13:28:33 2011 +0800
Add default-value ddl support with annotation @DefaultValue.
diff --git a/hibernate-core/src/main/java/org/hibernate/annotations/DefaultValue.java b/hibernate-core/src/main/java/org/hibernate/annotations/DefaultValue.java
new file mode 100644
index 0000000..b3e605e
--- /dev/null
+++ b/hibernate-core/src/main/java/org/hibernate/annotations/DefaultValue.java
@@ -0,0 +1,35 @@
+package org.hibernate.annotations;
+
+import static java.lang.annotation.ElementType.FIELD;
+import static java.lang.annotation.ElementType.METHOD;
+import static java.lang.annotation.RetentionPolicy.RUNTIME;
+
+import java.lang.annotation.Retention;
+
+/**
+ * Specify a default value for the column.
+ *
+ * This is used to generate the auto DDL.
+ *
+ * WARNING: This is not part of JPA 2.0 specification.
+ *
+ * @author ???
+ */
+@Target({ FIELD, METHOD })
+@Retention(RUNTIME)
+public @interface DefaultValue {
+
+ /**
+ * The default value sql fragment.
+ *
+ * For string values, you need to quote the value like 'foo'.
+ *
+ * Because different database implementation may use different
+ * quoting format, so this is not portable. But for simple values
+ * like number and strings, this is generally enough for use.
+ */
+ String value();
+
+}
diff --git a/hibernate-core/src/main/java/org/hibernate/cfg/Ejb3Column.java b/hibernate-core/src/main/java/org/hibernate/cfg/Ejb3Column.java
index b289b1e..ac57f1a 100644
--- a/hibernate-core/src/main/java/org/hibernate/cfg/Ejb3Column.java
+++ b/hibernate-core/src/main/java/org/hibernate/cfg/Ejb3Column.java
@@ -29,6 +29,7 @@ import org.hibernate.AnnotationException;
import org.hibernate.AssertionFailure;
import org.hibernate.annotations.ColumnTransformer;
import org.hibernate.annotations.ColumnTransformers;
+import org.hibernate.annotations.DefaultValue;
import org.hibernate.annotations.common.reflection.XProperty;
import org.hibernate.cfg.annotations.Nullability;
import org.hibernate.mapping.Column;
@@ -65,6 +66,7 @@ public class Ejb3Column {
private String propertyName;
private boolean unique;
private boolean nullable = true;
+ private String defaultValue;
private String formulaString;
private Formula formula;
private Table table;
@@ -175,7 +177,15 @@ public class Ejb3Column {
return mappingColumn.isNullable();
}
- public Ejb3Column() {
+ public String getDefaultValue() {
+ return defaultValue;
+ }
+
+ public void setDefaultValue(String defaultValue) {
+ this.defaultValue = defaultValue;
+ }
+
+ public Ejb3Column() {
}
public void bind() {
@@ -186,7 +196,7 @@ public class Ejb3Column {
}
else {
initMappingColumn(
- logicalColumnName, propertyName, length, precision, scale, nullable, sqlType, unique, true
+ logicalColumnName, propertyName, length, precision, scale, nullable, sqlType, unique, defaultValue, true
);
log.debug( "Binding column: " + toString());
}
@@ -201,6 +211,7 @@ public class Ejb3Column {
boolean nullable,
String sqlType,
boolean unique,
+ String defaultValue,
boolean applyNamingStrategy) {
if ( StringHelper.isNotEmpty( formulaString ) ) {
this.formula = new Formula();
@@ -217,6 +228,7 @@ public class Ejb3Column {
this.mappingColumn.setNullable( nullable );
this.mappingColumn.setSqlType( sqlType );
this.mappingColumn.setUnique( unique );
+ this.mappingColumn.setDefaultValue(defaultValue);
if(writeExpression != null && !writeExpression.matches("[^?]*\\?[^?]*")) {
throw new AnnotationException(
@@ -454,6 +466,11 @@ public class Ejb3Column {
else {
column.setLogicalColumnName( columnName );
}
+ DefaultValue _defaultValue = inferredData.getProperty().getAnnotation(DefaultValue.class);
+ if (_defaultValue != null) {
+ String defaultValue = _defaultValue.value();
+ column.setDefaultValue(defaultValue);
+ }
column.setPropertyName(
BinderHelper.getRelativePath( propertyHolder, inferredData.getPropertyName() )
diff --git a/hibernate-core/src/main/java/org/hibernate/cfg/Ejb3JoinColumn.java b/hibernate-core/src/main/java/org/hibernate/cfg/Ejb3JoinColumn.java
index e57636a..3d871f7 100644
--- a/hibernate-core/src/main/java/org/hibernate/cfg/Ejb3JoinColumn.java
+++ b/hibernate-core/src/main/java/org/hibernate/cfg/Ejb3JoinColumn.java
@@ -423,6 +424,7 @@ public class Ejb3JoinColumn extends Ejb3Column {
getMappingColumn() != null ? getMappingColumn().isNullable() : false,
referencedColumn.getSqlType(),
getMappingColumn() != null ? getMappingColumn().isUnique() : false,
+ null, // default-value
false
);
linkWithValue( value );
@@ -502,6 +504,7 @@ public class Ejb3JoinColumn extends Ejb3Column {
getMappingColumn().isNullable(),
column.getSqlType(),
getMappingColumn().isUnique(),
+ null, // default-value
false //We do copy no strategy here
);
linkWithValue( value );
Well, this is a hibernate-only solution.
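Hypothetical usage of the patched annotation, with an assumed entity and column (this shows how the patch is meant to be used; it is not code from the original answer):
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.annotations.DefaultValue; // the annotation added by this patch

@Entity
public class Account {
    @Id
    @GeneratedValue
    private Long id;

    // quoted because the value is a raw SQL fragment, per the annotation's javadoc
    @DefaultValue("'ACTIVE'")
    @Column(name = "status")
    private String status;
}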
First of all, @Fetch(FetchMode.JOIN)
and @ManyToOne(fetch = FetchType.LAZY)
are antagonistic because @Fetch(FetchMode.JOIN)
is equivalent to the JPA FetchType.EAGER
.
Eager fetching is rarely a good choice, and for predictable behavior, you are better off using the query-time JOIN FETCH
directive:
public interface PlaceRepository extends JpaRepository<Place, Long>, PlaceRepositoryCustom {
@Query(value = "SELECT p FROM Place p LEFT JOIN FETCH p.author LEFT JOIN FETCH p.city c LEFT JOIN FETCH c.state where p.id = :id")
Place findById(@Param("id") int id);
}
public interface CityRepository extends JpaRepository<City, Long>, CityRepositoryCustom {
@Query(value = "SELECT c FROM City c LEFT JOIN FETCH c.state where c.id = :id")
City findById(@Param("id") int id);
}
JpaRepository
The Spring Data JpaRepository
defines the following two methods:
- getOne, which returns an entity proxy that is suitable for setting a @ManyToOne or @OneToOne parent association when persisting a child entity.
- findById, which returns the entity POJO after running the SELECT statement that loads the entity from the associated table.
However, in your case, you didn't call either getOne or findById:
Person person = personRepository.findOne(1L);
So, I assume the findOne
method is a method you defined in the PersonRepository
. However, the findOne
method is not very useful in your case. Since you need to fetch the Person
along with its roles
collection, it's better to use a findOneWithRoles
method instead.
You can define a PersonRepositoryCustom
interface, as follows:
public interface PersonRepository
extends JpaRepository<Person, Long>, PersonRepositoryCustom {
}
public interface PersonRepositoryCustom {
Person findOneWithRoles(Long id);
}
And define its implementation like this:
public class PersonRepositoryImpl implements PersonRepositoryCustom {
@PersistenceContext
private EntityManager entityManager;
@Override
public Person findOneWithRoles(Long id) {
return entityManager.createQuery("""
select p
from Person p
left join fetch p.roles
where p.id = :id
""", Person.class)
.setParameter("id", id)
.getSingleResult();
}
}
That's it!
My Answer: All of the following should be overridden (i.e. describe them all within columnDefinition, if appropriate):
length
precision
scale
nullable
unique
i.e. the column DDL will consist of: name + columnDefinition and nothing else.
Rationale follows.
Annotations containing the word "Column" or "Table" are purely physical - their properties are only used to control the DDL/DML against the database. Other annotations are purely logical - their properties are used in-memory in Java to control JPA processing.
That's why sometimes it appears the optionality/nullability is set twice - once via @Basic(...,optional=true)
and once via @Column(...,nullable=true)
. The former says the attribute/association can be null in the JPA object model (in-memory) at flush time; the latter says the DB column can be null. Usually you'd want them set the same - but not always, depending on how the DB tables are set up and reused.
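For example, the two can legitimately diverge like this (a sketch with an assumed field name):
import javax.persistence.Basic;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class LegacyRecord {
    @Id
    private Long id;

    @Basic(optional = false)   // JPA object model: must be non-null at flush time
    @Column(nullable = true)   // database DDL: the column itself stays nullable
    private String legacyCode; // e.g. old rows may hold NULL, newly flushed ones may not
}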
In your example, length and nullable properties are overridden and redundant.
So, when specifying columnDefinition, what other properties of @Column are made redundant?
In JPA Spec & javadoc:
columnDefinition
definition:
The SQL fragment that is used when generating the DDL for the column.
columnDefinition
default:
Generated SQL to create a column of the inferred type.
The following examples are provided:
@Column(name="DESC", columnDefinition="CLOB NOT NULL", table="EMP_DETAIL")
@Column(name="EMP_PIC", columnDefinition="BLOB NOT NULL")
And, err..., that's it really. :-$ ?!
Does columnDefinition override other properties provided in the same annotation?
The javadoc and JPA spec don't explicitly address this - the spec's not giving great protection. To be 100% sure, test with your chosen implementation.
The following can be safely implied from examples provided in the JPA spec:
- name & table can be used in conjunction with columnDefinition; neither is overridden
- nullable is overridden/made redundant by columnDefinition
The following can be fairly safely implied from the "logic of the situation" (did I just say that?? :-P ):
- length, precision, scale are overridden/made redundant by the columnDefinition - they are integral to the type
- insertable and updatable are provided separately and never included in columnDefinition, because they control SQL generation in-memory, before it is emitted to the database.
That leaves just the "unique" property. It's similar to nullable - it extends/qualifies the type definition, so it should be treated as integral to the type definition, i.e. it should be overridden.
Test My Answer: for columns "A" & "B", respectively:
@Column(name="...", table="...", insertable=true, updatable=false,
columnDefinition="NUMBER(5,2) NOT NULL UNIQUE")
@Column(name="...", table="...", insertable=false, updatable=true,
columnDefinition="NVARCHAR2(100) NULL")
As you have defined the id as int type at database creation, you have to use the same data type in the model class too. And as you have defined the id to auto-increment in the database, you have to mention it in the model class by passing the value GenerationType.AUTO to the 'strategy' attribute of the @GeneratedValue annotation. The code then becomes as below.
@Entity
public class Operator{
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private int id;
private String username;
private String password;
private Integer active;
//Getters and setters...
}
Hibernate does not allow specifying time zones by annotation or any other means. If you use Calendar instead of Date, you can implement a workaround using the Hibernate property AccessType and implementing the mapping yourself. The more advanced solution is to implement a custom UserType to map your Date or Calendar. Both solutions are explained in my blog post here: http://www.joobik.com/2010/11/mapping-dates-and-time-zones-with.html
Hypothetically, if search landed you on this question then you probably want this:
doReturn(someReturn).when(someObject).doSomething(argThat(argument -> argument.getName().equals("Bob")));
Why? Because like me you value time and you are not going to implement .equals
just for the sake of the single test scenario.
And 99% of tests fall apart when a mock returns null, and in a reasonable design you would avoid returning null at all costs, use Optional, or move to Kotlin. This implies that verify does not need to be used that often, and ArgumentCaptors are just too tedious to write.
Edit: please be sure that you have both Python versions installed on your computer.
Maybe my answer is late for you, but it can help someone else who has the same problem!
You don't have to download both Anaconda versions.
If you are using Spyder and Jupyter in an Anaconda environment and you already have Anaconda 2, type in the terminal:
python3 -m pip install ipykernel
python3 -m ipykernel install --user
If you already have Anaconda 3, then type in the terminal:
python2 -m pip install ipykernel
python2 -m ipykernel install --user
Then, before using Spyder, you can choose the Python environment like below!
Sometimes you can only see root and your new Python environment - root is your first Anaconda environment!
This also works in Jupyter; you can choose the Python version the same way!
I hope it will help.
Java does not have pointers like C has, but it does allow you to create new objects on the heap which are "referenced" by variables. The lack of pointers is to stop Java programs from referencing memory locations illegally, and also enables Garbage Collection to be automatically carried out by the Java Virtual Machine.
A warning about the jQuery Caret plugin.
It will conflict with the Masked Input plugin (or vice versa). Fortunately the Masked Input plugin includes a caret() function of its own, which you can use very similarly to the Caret plugin for your basic needs - $(element).caret().begin or .end
function getCookie(name) {
var dc = document.cookie;
var prefix = name + "=";
var begin = dc.indexOf("; " + prefix);
if (begin == -1) {
begin = dc.indexOf(prefix);
if (begin != 0) return null;
else{
var oneCookie = dc.indexOf(';', begin);
if(oneCookie == -1){
var end = dc.length;
}else{
var end = oneCookie;
}
return dc.substring(begin, end).replace(prefix,'');
}
}
else
{
begin += 2;
var end = document.cookie.indexOf(";", begin);
if (end == -1) {
end = dc.length;
}
var fixed = dc.substring(begin, end).replace(prefix,'');
}
// return decodeURI(dc.substring(begin + prefix.length, end));
return fixed;
}
I tried @jac's function, ran into some trouble, and here's how I edited it.
Until the day #pragma once
becomes standard (that's not currently a priority for the future standards), I suggest you use it AND use guards, this way:
#ifndef BLAH_H
#define BLAH_H
#pragma once
// ...
#endif
The reasons are:
- #pragma once is not standard, so it is possible that some compilers don't provide the functionality. That said, all major compilers support it. If a compiler doesn't know it, at least it will be ignored.
- Since there is no standard definition of #pragma once, you shouldn't assume that the behavior will be the same on all compilers. The guards will ensure at least that the basic assumption is the same for all compilers that at least implement the needed preprocessor instructions for guards.
- On most compilers, #pragma once will speed up compilation (of one cpp) because the compiler will not reopen the file containing this instruction. So having it in a file might help, or not, depending on the compiler. I heard g++ can do the same optimization when guards are detected, but that has to be confirmed.
Using the two together you get the best of each compiler for this.
Now, if you don't have some automatic script to generate the guards, it might be more convenient to just use #pragma once
. Just know what that means for portable code. (I'm using VAssistX to generate the guards and pragma once quickly)
You should almost always think your code in a portable way (because you don't know what the future is made of) but if you really think that it's not meant to be compiled with another compiler (code for very specific embedded hardware for example) then you should just check your compiler documentation about #pragma once
to know what you're really doing.
You could do this:
sample_data = data[sample(nrow(data), sample_size, replace = FALSE), ]
One more example. The following query:
qryAux = (from q in qryAux where
q.OrdSeq == (from pp in Sessao.Query<NameTable>() where pp.FieldPk
== q.FieldPk select pp.OrdSeq).Max() select q);
is equivalent to:
select t.* from nametable t where t.OrdSeq =
(select max(t2.OrdSeq) from nametable t2 where t2.FieldPk= t.FieldPk)
For a VB or C# app, one way to do this is by using Office Interop. This depends on which version of Excel you're working with.
For Excel 2003, this MSDN article is a good place to start. Understanding the Excel Object Model from a Visual Studio 2005 Developer's Perspective
You'll basically need to do the following:
Excel.Range allCellsRng;
string lowerRightCell = "IV65536";
allCellsRng = ws.get_Range("A1", lowerRightCell).Cells;
foreach (Range cell in allCellsRng)
{
if (null == cell.Value2 || isBlank(cell.Value2))
{
// Do something.
}
else if (isText(cell.Value2))
{
// Do something.
}
else if (isNumeric(cell.Value2))
{
// Do something.
}
}
For Excel 2007, try this MSDN reference.
I modified Sanoj Dushmantha's answer to use sessionStorage instead of localStorage. However, despite the documentation, browsers will still store this data even after the browser is closed. To fix this issue, I am removing the scroll position after it is reset.
<script>
document.addEventListener("DOMContentLoaded", function (event) {
var scrollpos = sessionStorage.getItem('scrollpos');
if (scrollpos) {
window.scrollTo(0, scrollpos);
sessionStorage.removeItem('scrollpos');
}
});
window.addEventListener("beforeunload", function (e) {
sessionStorage.setItem('scrollpos', window.scrollY);
});
</script>
The jQuery answer. Since so much JavaScript development relies on jQuery, here is an example doing this in jQuery:
<div class="menu">
<a href="http://example.org">Example</a>
<a href="http://foobar.com">Foobar.com</a>
</div>
<script>
jQuery( 'div.menu a' )
.click(function() {
do_the_click( this.href );
return false;
});
// play the funky music white boy
function do_the_click( url )
{
alert( url );
}
</script>
cat dump.txt | head -16224 | tail -258
should do the trick. The downside of this approach is that you need to do the arithmetic to determine the argument for tail and to account for whether you want the 'between' to include the ending line or not. For example, to print lines 15967 through 16224 inclusive, you keep the first 16224 lines with head and then take the last 16224 - 15967 + 1 = 258 of them with tail.
Edit:
As an alternative to hasProperty, you can try the where matcher from hamcrest-more-matchers with an extracting function. In your case it will look like:
import static com.github.seregamorph.hamcrest.MoreMatchers.where;
assertThat(myClass.getMyItems(), contains(
where(MyItem::getName, is("foo")),
where(MyItem::getName, is("bar"))
));
One advantage of this approach is the readable mismatch message, for example:
Expected: iterable containing [Object that matches is "foo" after call
MyItem.getName, Object that matches is "bar" after call MyItem.getName]
but: item 0: was "wrong-name"
Or you could use String.ToCharArray().
It takes an int startIndex and an int length as parameters and returns a char[].
new string(stringValue.ToCharArray(0,5))
You would still need to make sure the string has the proper length, otherwise it will throw an ArgumentOutOfRangeException.
You can try
$string = "this is the test for string." ;
$string = str_replace(' ', '_', $string);
$string = substr($string,0,10);
var_dump($string);
Output
this_is_th
You just want a command which gives exactly the same result as rm -rf local_repo && git clone remote_url
, right? I also want this feature. I wonder why git does not provide such a command (such as git reclone
or git sync
), neither does svn provide such a command (such as svn recheckout
or svn sync
).
Try the following command:
git reset --hard origin/master
git clean -fxd
git pull
In order to understand the use of curly braces in import
statements, first, you have to understand the concept of destructuring introduced in ES6
Object destructuring
var bodyBuilder = {
firstname: 'Kai',
lastname: 'Greene',
nickname: 'The Predator'
};
var {firstname, lastname} = bodyBuilder;
console.log(firstname, lastname); // Kai Greene
firstname = 'Morgan';
lastname = 'Aste';
console.log(firstname, lastname); // Morgan Aste
Array destructuring
var [firstGame] = ['Gran Turismo', 'Burnout', 'GTA'];
console.log(firstGame); // Gran Turismo
Using list matching
var [,secondGame] = ['Gran Turismo', 'Burnout', 'GTA'];
console.log(secondGame); // Burnout
Using the spread operator
var [firstGame, ...rest] = ['Gran Turismo', 'Burnout', 'GTA'];
console.log(firstGame);// Gran Turismo
console.log(rest);// ['Burnout', 'GTA'];
Now that we've got that out of our way, in ES6 you can export multiple modules. You can then make use of object destructuring like below.
Let's assume you have a module called module.js
export const printFirstname = (firstname) => console.log(firstname);
export const printLastname = (lastname) => console.log(lastname);
You would like to import the exported functions into index.js
;
import {printFirstname, printLastname} from './module.js'
printFirstname('Taylor');
printLastname('Swift');
You can also use different variable names like so
import {printFirstname as pFname, printLastname as pLname} from './module.js'
pFname('Taylor');
pLname('Swift');
\
does the job. @Guillaume's answer and @George's comment clearly answer this question. Here I explain why the backslash has to be the very last character before the end-of-line character.
Consider this command:
mysql -uroot \ -hlocalhost
If there is a space after \
, the line continuation will not work. The reason is that \
removes the special meaning for the next character which is a space not the invisible line feed character. The line feed character is after the space not \
in this example.
I think there should be an entry for your your_module.ko in /lib/modules/$(uname -r)/modules.dep and in /lib/modules/$(uname -r)/modules.dep.bin for the "modprobe your_module" command to work.
This will round down correctly as well:
>>> n = 46
>>> rem = n % 10
>>> if rem < 5:
... n = int(n / 10) * 10
... else:
... n = int((n + 10) / 10) * 10
...
>>> n
50
Why use GROUP BY at all?
SELECT player_name, player_salary, player_salary*1.1 AS NewSalary
FROM players
ORDER BY player_salary DESC
Building on Peter N's answer, this is an example of how to add (optional) user-specified arguments to pass to Java main for a JavaExec task (since you can't set the 'args' property manually for the reason he cites.)
Add this to the task:
task(runProgram, type: JavaExec) {
[...]
if (project.hasProperty('myargs')) {
args(myargs.split(','))
    }
}
... and run at the command line like this
% ./gradlew runProgram '-Pmyargs=-x,7,--no-kidding,/Users/rogers/tests/file.txt'
My issue was the Eclipse Marketplace client needed updating.
After trying Fredrik's solution of:
Go to Window -> Preferences -> Install/update: Available Software sites. Then remove and add the indigo site. Just remember to copy the adress so you can add it again.
The Marketplace client wouldn't load, but I could access the marketplace via a browser. So I went to Help -> Eclipse Marketplace and it loaded fine.
I clicked on Installed, found the Eclipse Marketplace Client, and it had an update available, so I clicked it and it updated. Then, when I did the standard update, everything worked.
For fun, I popped List into reflector and this is the resulting C#:
public void ForEach(Action<T> action)
{
if (action == null)
{
ThrowHelper.ThrowArgumentNullException(ExceptionArgument.match);
}
for (int i = 0; i < this._size; i++)
{
action(this._items[i]);
}
}
Similarly, the MoveNext in Enumerator, which is what foreach uses, is this:
public bool MoveNext()
{
if (this.version != this.list._version)
{
ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_EnumFailedVersion);
}
if (this.index < this.list._size)
{
this.current = this.list._items[this.index];
this.index++;
return true;
}
this.index = this.list._size + 1;
this.current = default(T);
return false;
}
The List.ForEach is much more trimmed down than MoveNext - far less processing - and will more likely JIT into something efficient.
In addition, foreach() will allocate a new Enumerator no matter what. The GC is your friend, but if you're doing the same foreach repeatedly, this will make more throwaway objects, as opposed to reusing the same delegate - BUT - this is really a fringe case. In typical usage you will see little or no difference.
Use a temp table to insert the range of values, then select the min/max of the temp table from within a stored procedure or UDF. This is a basic construct, so feel free to revise as needed.
For example:
CREATE PROCEDURE GetMinSpeed AS
BEGIN
CREATE TABLE #speed (Driver NVARCHAR(10), SPEED INT);
-- Insert any number of rows you need to sort and pull from
INSERT INTO #speed VALUES (N'Petty', 165)
INSERT INTO #speed VALUES (N'Earnhardt', 172)
INSERT INTO #speed VALUES (N'Patrick', 174)
SELECT MIN(SPEED) FROM #speed
DROP TABLE #speed
END
Laravel Eloquent provides a destroy() function which returns a boolean value. So if a record exists in the database and is deleted, you'll get true, otherwise false.
Here's an example using Laravel Tinker shell.
In this case, your code should look like this:
public function destroy($id)
{
$res = User::destroy($id);
if ($res) {
return response()->json([
'status' => '1',
'msg' => 'success'
]);
} else {
return response()->json([
'status' => '0',
'msg' => 'fail'
]);
}
}
More info about Laravel Eloquent Deleting Models
As ping
works, but telnet
to port 80
does not, the HTTP port 80
is closed on your machine. I assume that your browser's HTTP connection goes through a proxy (as browsing works, how else would you read stackoverflow?).
You need to add some code to your Python program that handles the proxy, as described here:
This is an example for a RelativeLayout:
RelativeLayout relativeLayout=(RelativeLayout)vi.findViewById(R.id.RelativeLayoutLeft);
RelativeLayout.LayoutParams params = (RelativeLayout.LayoutParams)relativeLayout.getLayoutParams();
params.addRule(RelativeLayout.ALIGN_PARENT_RIGHT);
relativeLayout.setLayoutParams(params);
With another kind of layout (for example LinearLayout) you simply change RelativeLayout to LinearLayout, though see the caveat sketched below.
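One caveat: LinearLayout.LayoutParams has no addRule(), so the closest equivalent uses gravity. A sketch (the view ID is assumed):
// the LinearLayout counterpart of the snippet above
LinearLayout linearLayout = (LinearLayout) vi.findViewById(R.id.linearLayoutLeft);
LinearLayout.LayoutParams params =
        (LinearLayout.LayoutParams) linearLayout.getLayoutParams();
params.gravity = Gravity.RIGHT; // alignment is expressed via gravity here
linearLayout.setLayoutParams(params);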
In order to make a pull request you need to do the following steps:
It took me a while to figure this out; I hope this will help someone.
If you do not use DockerFile, you can simply add permission as command line argument of the bash:
docker run -t <image> /bin/bash -c "chmod +x /usr/src/app/docker-entrypoint.sh; /usr/src/app/docker-entrypoint.sh"
I'm certain there are better ways of doing this, but I have in the past used a method something like the following to serialize an object into a string that I can log:
private string ObjectToXml(object output)
{
string objectAsXmlString;
System.Xml.Serialization.XmlSerializer xs = new System.Xml.Serialization.XmlSerializer(output.GetType());
using (System.IO.StringWriter sw = new System.IO.StringWriter())
{
try
{
xs.Serialize(sw, output);
objectAsXmlString = sw.ToString();
}
catch (Exception ex)
{
objectAsXmlString = ex.ToString();
}
}
return objectAsXmlString;
}
You'll see that the method might also return the exception rather than the serialized object, so you'll want to ensure that the objects you want to log are serializable.
In pandas 0.16.1+ you can drop columns only if they exist per the solution posted by @eiTanLaVi. Prior to that version, you can achieve the same result via a conditional list comprehension:
df.drop([col for col in ['col_name_1','col_name_2',...,'col_name_N'] if col in df],
axis=1, inplace=True)
Something like this in your form main. Double click the form in the visual editor to create the form load event.
Timer Clock=new Timer();
Clock.Interval=2700000; // not sure if this length of time will work
Clock.Start();
Clock.Tick+=new EventHandler(Timer_Tick);
Then add an event handler to do something when the timer fires.
public void Timer_Tick(object sender,EventArgs eArgs)
{
if(sender==Clock)
{
// do something here
}
}
Your script could be stopping, not because of the PHP timeout but because of the timeout in the browser you're using to access the script (ie. Firefox, Chrome, etc). Unfortunately there's seldom an easy way to extend this timeout, and in most browsers you simply can't. An option you have here is to access the script over a terminal. For example, on Windows you would make sure the PHP executable is in your path variable and then I think you execute:
C:\path\to\script> php script.php
Or, if you're using the PHP CGI, I think it's:
C:\path\to\script> php-cgi script.php
Plus, you would also set ini_set('max_execution_time', 0);
in your script as others have mentioned. When running a PHP script this way, I'm pretty sure you can use buffer flushing to echo out the script's progress to the terminal periodically if you wish. The biggest issue I think with this method is there's really no way of stopping the script once it's started, other than stopping the entire PHP process or service.
ALTER TABLE table_name ADD column_name datatype AUTO_INCREMENT, ADD PRIMARY KEY (column_name);
If the number of fields in the CSV is constant then you could do something like this:
select a[1], a[2], a[3], a[4]
from (
select regexp_split_to_array('a,b,c,d', ',')
) as dt(a)
For example:
=> select a[1], a[2], a[3], a[4] from (select regexp_split_to_array('a,b,c,d', ',')) as dt(a);
a | a | a | a
---+---+---+---
a | b | c | d
(1 row)
If the number of fields in the CSV is not constant then you could get the maximum number of fields with something like this:
select max(array_length(regexp_split_to_array(csv, ','), 1))
from your_table
and then build the appropriate a[1], a[2], ..., a[M]
column list for your query. So if the above gave you a max of 6, you'd use this:
select a[1], a[2], a[3], a[4], a[5], a[6]
from (
select regexp_split_to_array(csv, ',')
from your_table
) as dt(a)
You could combine those two queries into a function if you wanted.
For example, given this data (that's a NULL in the last row):
=> select * from csvs;
csv
-------------
1,2,3
1,2,3,4
1,2,3,4,5,6
(4 rows)
=> select max(array_length(regexp_split_to_array(csv, ','), 1)) from csvs;
max
-----
6
(1 row)
=> select a[1], a[2], a[3], a[4], a[5], a[6] from (select regexp_split_to_array(csv, ',') from csvs) as dt(a);
a | a | a | a | a | a
---+---+---+---+---+---
1 | 2 | 3 | | |
1 | 2 | 3 | 4 | |
1 | 2 | 3 | 4 | 5 | 6
| | | | |
(4 rows)
Since your delimiter is a simple fixed string, you could also use string_to_array
instead of regexp_split_to_array
:
select ...
from (
select string_to_array(csv, ',')
from csvs
) as dt(a);
Thanks to Michael for the reminder about this function.
You really should redesign your database schema to avoid the CSV column if at all possible. You should be using an array column or a separate table instead.
Date.parse()
isn't a constructor; it's a static method.
So, just use
var timeInMillis = Date.parse(s);
instead of
var timeInMillis = new Date.parse(s);
I came across the same issue while working on a Selenium project (Maven). The project folder and pom.xml were showing a red cross symbol. This happened because I had the test datasheet open. I removed the error by just closing the datasheet and never faced the issue again.
You are passing a reference to the list, but you aren't passing the list variable by reference - so when you call ChangeList
the value of the variable (i.e. the reference - think "pointer") is copied - and changes to the value of the parameter inside ChangeList
aren't seen by TestMethod
.
try:
private void ChangeList(ref List<int> myList) {...}
...
ChangeList(ref myList);
This then passes a reference to the local variable myList
(as declared in TestMethod
); now, if you reassign the parameter inside ChangeList
you are also reassigning the variable inside TestMethod
.
You can let the user set a read-only property by providing it through the constructor:
public class Person
{
public Person(int id)
{
this.Id = id;
}
public string Name { get; set; }
public int Id { get; private set; }
public int Age { get; set; }
}
Just to address the last part of your question, since that really points out the difference between a list
and vector
in R:
Why do these two expressions not return the same result?
x = list(1, 2, 3, 4); x2 = list(1:4)
A list can contain any other class as each element. So you can have a list where the first element is a character vector, the second is a data frame, etc. In this case, you have created two different lists. x
has four vectors, each of length 1. x2
has 1 vector of length 4:
> length(x[[1]])
[1] 1
> length(x2[[1]])
[1] 4
So these are completely different lists.
R lists are very much like a hash map data structure in that each index value can be associated with any object. Here's a simple example of a list that contains 3 different classes (including a function):
> complicated.list <- list("a"=1:4, "b"=1:3, "c"=matrix(1:4, nrow=2), "d"=search)
> lapply(complicated.list, class)
$a
[1] "integer"
$b
[1] "integer"
$c
[1] "matrix"
$d
[1] "function"
Given that the last element is the search function, I can call it like so:
> complicated.list[["d"]]()
[1] ".GlobalEnv" ...
As a final comment on this: it should be noted that a data.frame
is really a list (from the data.frame
documentation):
A data frame is a list of variables of the same number of rows with unique row names, given class ‘"data.frame"’
That's why columns in a data.frame
can have different data types, while columns in a matrix cannot. As an example, here I try to create a matrix with numbers and characters:
> a <- 1:4
> class(a)
[1] "integer"
> b <- c("a","b","c","d")
> d <- cbind(a, b)
> d
a b
[1,] "1" "a"
[2,] "2" "b"
[3,] "3" "c"
[4,] "4" "d"
> class(d[,1])
[1] "character"
Note how I cannot change the data type in the first column to numeric because the second column has characters:
> d[,1] <- as.numeric(d[,1])
> class(d[,1])
[1] "character"
OK, so services are started, there is an ethernet path between them, name resolution works, linked servers work, and you disabled transaction authentication.
My gut says firewall issue, but a few things come to mind...
Knockout's documentation also mentions a much cleaner way of passing extra parameters to functions bound with the click binding, using function.bind like this:
<button data-bind="click: myFunction.bind($data, 'param1', 'param2')">
Click me
</button>
Put it in .gitignore
. But from the gitignore(5)
man page:
· If the pattern does not contain a slash /, git treats it as a shell glob pattern and checks for a match against the pathname relative to the location of the .gitignore file (relative to the toplevel of the work tree if not from a .gitignore file).
· Otherwise, git treats the pattern as a shell glob suitable for consumption by fnmatch(3) with the FNM_PATHNAME flag: wildcards in the pattern will not match a / in the pathname. For example, "Documentation/*.html" matches "Documentation/git.html" but not "Documentation/ppc/ppc.html" or "tools/perf/Documentation/perf.html".
So, either specify the full path to the appropriate *.pyc
entry, or put it in a .gitignore
file in any of the directories leading from the repository root (inclusive).
With Dispatcher you will need to include
using System.Windows.Threading;
Also note that if you right-click DispatcherTimer and click Resolve it should add the appropriate references.
The solution provided by Barry above is excellent, but there's the "flicker problem".
As soon as the Value is above zero, OnPaint will be invoked repeatedly and the text will flicker.
There is a solution to this. We do not need VisualStyles for the object since we will be drawing it with our own code.
Add the following code to the custom object Barry wrote and you will avoid the flicker:
[DllImportAttribute("uxtheme.dll")]
private static extern int SetWindowTheme(IntPtr hWnd, string appname, string idlist);
protected override void OnHandleCreated(EventArgs e)
{
SetWindowTheme(this.Handle, "", "");
base.OnHandleCreated(e);
}
I did not write this myself; I found it here: https://stackoverflow.com/a/299983/1163954
I've tested it and it works.
logger
logs to syslog facilities. If you want the message to go to a particular file you have to modify the syslog configuration accordingly. You could add a line like this:
local7.* -/var/log/mycustomlog
and restart syslog. Then you can log like this:
logger -p local7.info "information message"
logger -p local7.err "error message"
and the messages will appear in the desired logfile with the correct log level.
Without making changes to the syslog configuration you could use logger
like this:
logger -s "foo bar" >> /var/log/mycustomlog
That would instruct logger
to print the message to STDERR as well (in addition to logging it to syslog), so you could redirect STDERR to a file. However, it would be utterly pointless, because the message is already logged via syslog anyway (with the default priority user.notice
).
I use this piece of code; it works fine for me. Copy it and run it on your computer - you'll understand it by yourself.
#include <iostream>
#include <vector>
using namespace std;
int main()
{
vector<vector<int>> matrix;
size_t row = 3, col = 3;
for (size_t i = 0, cnt = 1; i < row; i++)
{
    // add one row per outer iteration (the original pushed a new row on
    // every inner iteration, creating extra empty rows)
    matrix.push_back(vector<int>());
    for (size_t j = 0; j < col; j++)
    {
        matrix.at(i).push_back(cnt++);
    }
}
matrix.at(1).at(1) = 0; // matrix.at(rowIndex).at(colIndex) = intValue
//printing all elements
for (size_t i = 0; i < row; i++)
{
    for (size_t j = 0; j < col; j++)
{
cout<<matrix[i][j] <<" " ;
}
cout<<endl ;
}
}
In IIS >= 7, a <webLimits>
section has replaced ConnectionTimeout
, HeaderWaitTimeout
, MaxGlobalBandwidth
, and MinFileBytesPerSec
IIS 6 metabase settings.
Example Configuration:
<configuration>
<system.applicationHost>
<webLimits connectionTimeout="00:01:00"
dynamicIdleThreshold="150"
headerWaitTimeout="00:00:30"
minBytesPerSecond="500"
/>
</system.applicationHost>
</configuration>
For reference: more information regarding these settings in IIS can be found here. Also, I was unable to add this section to the web.config via the IIS manager's "configuration editor", though it did show up once I added it and searched the configuration.
Once I required a 2-D ArrayList, and I created it using List and ArrayList; the code is as follows:
import java.util.*;
public class ArrayListMatrix {
public static void main(String args[]){
List<ArrayList<Integer>> a = new ArrayList<>();
ArrayList<Integer> a1 = new ArrayList<Integer>();
ArrayList<Integer> a2 = new ArrayList<Integer>();
ArrayList<Integer> a3 = new ArrayList<Integer>();
a1.add(1);
a1.add(2);
a1.add(3);
a2.add(4);
a2.add(5);
a2.add(6);
a3.add(7);
a3.add(8);
a3.add(9);
a.add(a1);
a.add(a2);
a.add(a3);
for(ArrayList<Integer> temp : a){
for(Integer job : temp){
System.out.print(job+" ");
}
System.out.println();
}
}
}
Output:
1 2 3
4 5 6
7 8 9
Source : https://www.codepuran.com/java/2d-matrix-arraylist-collection-class-java/
Name your Interface
what it is. Truck
. Not ITruck
because it isn't an ITruck
it is a Truck
.
An Interface
in Java is a Type. Then you have DumpTruck
, TransferTruck
, WreckerTruck
, CementTruck
, etc that implement Truck
.
When you are using the Interface
in place of a sub-class you just cast it to Truck
. As in List<Truck>
. Putting I
in front is just Hungarian style notation tautology that adds nothing but more stuff to type to your code.
All modern Java IDE's mark Interfaces and Implementations and what not without this silly notation. Don't call it TruckClass
that is tautology just as bad as the IInterface
tautology.
If it is an implementation it is a class. The only real exception to this rule, and there are always exceptions, could be something like AbstractTruck
. Since only the sub-classes will ever see this and you should never cast to an Abstract
class, it does add some information that the class is abstract and about how it should be used. You could still come up with a better name than AbstractTruck
and use BaseTruck
or DefaultTruck
instead since the abstract
is in the definition. But since Abstract
classes should never be part of any public facing interface I believe it is an acceptable exception to the rule. Making the constructors protected
goes a long way to crossing this divide.
And the Impl
suffix is just more noise as well. More tautology. Anything that isn't an interface is an implementation, even abstract classes which are partial implementations. Are you going to put that silly Impl
suffix on every name of every Class?
The Interface
is a contract on what the public methods and properties have to support, it is also Type information as well. Everything that implements Truck
is a Type of Truck
.
Look to the Java standard library itself. Do you see IList
, ArrayListImpl
, LinkedListImpl
? No, you see List
and ArrayList
, and LinkedList
. Here is a nice article about this exact question. Any of these silly prefix/suffix naming conventions all violate the DRY principle as well.
Also, if you find yourself adding DTO
, JDO
, BEAN
or other silly repetitive suffixes to objects then they probably belong in a package instead of all those suffixes. Properly packaged namespaces are self documenting and reduce all the useless redundant information in these really poorly conceived proprietary naming schemes that most places don't even internally adhere to in a consistent manner.
If all you can come up with to make your Class
name unique is suffixing it with Impl
, then you need to rethink having an Interface
at all. So when you have a situation where you have an Interface
and a single Implementation
that is not uniquely specialized from the Interface
you probably don't need the Interface
.
The solution given by Ash Clarke for subdomains works great, but please note that you need to include the document.domain = "mydomain.com"; in both the head of the iframe page and the head of the parent page, as stated in the link same origin policy checks
An important extension to the same origin policy implemented for JavaScript DOM access (but not for most of the other flavors of same-origin checks) is that two sites sharing a common top-level domain may opt to communicate despite failing the "same host" check by mutually setting their respective document.domain DOM property to the same qualified, right-hand fragment of their current host name. For example, if http://en.example.com/ and http://fr.example.com/ both set document.domain to "example.com", they would be from that point on considered same-origin for the purpose of DOM manipulation.
Decode them and compare them as mgilson comment.
Order does not matter for a dictionary as long as the keys and values match. (Dictionaries have no order in Python.)
>>> {'a': 1, 'b': 2} == {'b': 2, 'a': 1}
True
But order is important in list; sorting will solve the problem for the lists.
>>> [1, 2] == [2, 1]
False
>>> [1, 2] == sorted([2, 1])
True
>>> a = '{"errors": [{"error": "invalid", "field": "email"}, {"error": "required", "field": "name"}], "success": false}'
>>> b = '{"errors": [{"error": "required", "field": "name"}, {"error": "invalid", "field": "email"}], "success": false}'
>>> a, b = json.loads(a), json.loads(b)
>>> a['errors'].sort()
>>> b['errors'].sort()
>>> a == b
True
Above example will work for the JSON in the question. For general solution, see Zero Piraeus's answer.
<android.support.v7.widget.AppCompatCheckBox
android:id="@+id/checkBox"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="10dp"
android:layoutDirection="rtl"
android:text="text" />
SELECT name
FROM sys.databases
You'll only see the databases you have permission to see.
Just run this SQL to convert all database tables at once. Change your COLLATION and databaseName to what you need.
SELECT CONCAT("ALTER TABLE ", TABLE_SCHEMA, '.', TABLE_NAME," COLLATE utf8_general_ci;") AS ExecuteTheString
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA="databaseName"
AND TABLE_TYPE="BASE TABLE";
You can try this SQL:
select * from employee where rec_date between '2017-09-01' and '2017-09-11'
I ran into the same issue after upgrading Git. Turns out I switched from 32-bit to 64-bit Git and I didn't realize it. TortoiseGit was still looking for "C:\Program Files (x86)\Git\bin", which didn't exist. Right-click the folder, go to Tortoise Git > Settings > General and update the Git.exe path.
Frequently we deal with stored procedures created by fellow Java programmers, and we do not want to mess around with them. But it is possible that you need the result set of a call which, executed like the samples above, returns 0 (almost every stored procedure call returns zero).
Check this sample:
public void generateINOUT(String USER, int DPTID){
try {
conUrl = JdbcUrls + dbServers +";databaseName="+ dbSrcNames+";instance=MSSQLSERVER";
con = DriverManager.getConnection(conUrl,dbUserNames,dbPasswords);
//stat = con.createStatement();
con.setAutoCommit(false);
Statement st = con.createStatement();
st.executeUpdate("DECLARE @RC int\n" +
"DECLARE @pUserID nvarchar(50)\n" +
"DECLARE @pDepartmentID int\n" +
"DECLARE @pStartDateTime datetime\n" +
"DECLARE @pEndDateTime datetime\n" +
"EXECUTE [AccessManager].[dbo].[SP_GenerateInOutDetailReportSimple] \n" +
""+USER +
"," +DPTID+
",'"+STARTDATE +
"','"+ENDDATE+"'");
ResultSet rs = st.getGeneratedKeys();
while (rs.next()){
String userID = rs.getString("UserID");
Timestamp timeIN = rs.getTimestamp("timeIN");
Timestamp timeOUT = rs.getTimestamp ("timeOUT");
int totTime = rs.getInt ("totalTime");
int pivot = rs.getInt ("pivotvalue");
timeINS = sdz.format(timeIN);
userIN.add(timeINS);
timeOUTS = sdz.format(timeOUT);
userOUT.add(timeOUTS);
System.out.println("User : "+userID+" |IN : "+timeIN+" |OUT : "+timeOUT+"| Total Time : "+totTime+" | PivotValue : "+pivot);
}
con.commit();
}catch (Exception e) {
e.printStackTrace();
System.out.println(e);
if (e.getCause() != null) {
e.getCause().printStackTrace();}
}
}
I arrived at this solution after a few days of trial and error and googling while getting confused ;) It executes the stored procedure below:
USE [AccessManager]
GO
/****** Object: StoredProcedure [dbo].[SP_GenerateInOutDetailReportSimple]
Script Date: 04/05/2013 15:54:11 ******/
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[SP_GenerateInOutDetailReportSimple]
(
@pUserID nvarchar(50),
@pDepartmentID int,
@pStartDateTime datetime,
@pEndDateTime datetime
)
AS
Declare @ErrorCode int
Select @ErrorCode = @@Error
Declare @TransactionCountOnEntry int
If @ErrorCode = 0
Begin
Select @TransactionCountOnEntry = @@TranCount
BEGIN TRANSACTION
End
If @ErrorCode = 0
Begin
-- Create table variable instead of SQL temp table because report wont pick up the temp table
DECLARE @tempInOutDetailReport TABLE
(
UserID nvarchar(50),
LogDate datetime,
LogDay varchar(20),
TimeIN datetime,
TimeOUT datetime,
TotalTime int,
RemarkTimeIn nvarchar(100),
RemarkTimeOut nvarchar(100),
TerminalIPTimeIn varchar(50),
TerminalIPTimeOut varchar(50),
TerminalSNTimeIn nvarchar(50),
TerminalSNTimeOut nvarchar(50),
PivotValue int
)
-- Declare variables for the while loop
Declare @LogUserID nvarchar(50)
Declare @LogEventID nvarchar(50)
Declare @LogTerminalSN nvarchar(50)
Declare @LogTerminalIP nvarchar(50)
Declare @LogRemark nvarchar(50)
Declare @LogTimestamp datetime
Declare @LogDay nvarchar(20)
-- Filter off userID, departmentID, StartDate and EndDate if specified, only process the remaining logs
-- Note: order by user then timestamp
Declare LogCursor Cursor For
Select distinct access_event_logs.USERID, access_event_logs.EVENTID,
access_event_logs.TERMINALSN, access_event_logs.TERMINALIP,
access_event_logs.REMARKS, access_event_logs.LOCALTIMESTAMP, Datename(dw,access_event_logs.LOCALTIMESTAMP) AS WkDay
From access_event_logs
Left Join access_user on access_user.User_ID = access_event_logs.USERID
Left Join access_user_dept on access_user.User_ID = access_user_dept.User_ID
Where ((Dept_ID = @pDepartmentID) OR (@pDepartmentID IS NULL))
And ((access_event_logs.USERID LIKE '%' + @pUserID + '%') OR (@pUserID IS NULL))
And ((access_event_logs.LOCALTIMESTAMP >= @pStartDateTime ) OR (@pStartDateTime IS NULL))
And ((access_event_logs.LOCALTIMESTAMP < DATEADD(day, 1, @pEndDateTime) ) OR (@pEndDateTime IS NULL))
And (access_event_logs.USERID != 'UNKNOWN USER') -- Ignore UNKNOWN USER
Order by access_event_logs.USERID, access_event_logs.LOCALTIMESTAMP
Open LogCursor
Fetch Next
From LogCursor
Into @LogUserID, @LogEventID, @LogTerminalSN, @LogTerminalIP, @LogRemark, @LogTimestamp, @LogDay
-- Temp storage for IN event details
Declare @InEventUserID nvarchar(50)
Declare @InEventDay nvarchar(20)
Declare @InEventTimestamp datetime
Declare @InEventRemark nvarchar(100)
Declare @InEventTerminalIP nvarchar(50)
Declare @InEventTerminalSN nvarchar(50)
-- Temp storage for OUT event details
Declare @OutEventUserID nvarchar(50)
Declare @OutEventTimestamp datetime
Declare @OutEventRemark nvarchar(100)
Declare @OutEventTerminalIP nvarchar(50)
Declare @OutEventTerminalSN nvarchar(50)
Declare @CurrentUser varchar(50) -- used to indicate when we change user group
Declare @CurrentDay varchar(50) -- used to indicate when we change day
Declare @FirstEvent int -- indicate the first event we received
Declare @ReceiveInEvent int -- indicate we have received an IN event
Declare @PivotValue int -- everytime we change user or day - we reset it (reporting purpose), if same user..keep increment its value
Declare @CurrTrigger varchar(50) -- used to keep track of the event of the current event log trigger it is handling
Declare @CurrTotalHours int -- used to keep track of total hours of the day of the user
Declare @FirstInEvent datetime
Declare @FirstInRemark nvarchar(100)
Declare @FirstInTerminalIP nvarchar(50)
Declare @FirstInTerminalSN nvarchar(50)
Declare @FirstRecord int -- indicate another day of same user
Set @PivotValue = 0 -- initialised
Set @CurrentUser = '' -- initialised
Set @FirstEvent = 1 -- initialised
Set @ReceiveInEvent = 0 -- initialised
Set @CurrTrigger = '' -- Initialised
Set @CurrTotalHours = 0 -- initialised
Set @FirstRecord = 1 -- initialised
Set @CurrentDay = '' -- initialised
While @@FETCH_STATUS = 0
Begin
-- use to track current log trigger
Set @CurrTrigger =LOWER(@LogEventID)
If (@CurrentUser != '' And @CurrentUser != @LogUserID) -- new batch of user
Begin
If @ReceiveInEvent = 1 -- previous IN event is not cleared (no OUT is found)
Begin
-- Check day
If (@CurrentDay != @InEventDay) -- change to another day
Set @PivotValue = 0 -- Reset
Else -- same day
Set @PivotValue = @PivotValue + 1 -- increment
Set @CurrentDay = @InEventDay -- update the day
-- invalid row (only has IN event)
Insert into @tempInOutDetailReport( UserID, LogDay, TimeIN, RemarkTimeIn, TerminalIPTimeIn,
TerminalSNTimeIn, PivotValue, LogDate )
values( @InEventUserID, @InEventDay, @InEventTimestamp, @InEventRemark, @InEventTerminalIP,
@InEventTerminalSN, @PivotValue, DATEADD(HOUR, 0, DATEDIFF(DAY, 0, @InEventTimestamp)))
End
Set @FirstEvent = 1 -- Reset flag (we are having a new user group)
Set @ReceiveInEvent = 0 -- Reset
Set @PivotValue = 0 -- Reset
--Set @CurrentDay = '' -- Reset
End
If LOWER(@LogEventID) = 'in' -- IN event
Begin
If @ReceiveInEvent = 1 -- previous IN event is not cleared (no OUT is found)
Begin
-- Check day
If (@CurrentDay != @InEventDay) -- change to another day
Begin
Set @PivotValue = 0 -- Reset
--Insert into @tempInOutDetailReport( UserID, LogDay, TimeIN, TimeOUT, TotalTime, RemarkTimeIn,
-- RemarkTimeOut, TerminalIPTimeIn, TerminalIPTimeOut, TerminalSNTimeIn, TerminalSNTimeOut, PivotValue,
-- LogDate)
--values( @LogUserID, @CurrentDay, @FirstInEvent, @LogTimestamp, @CurrTotalHours,
-- @FirstInRemark, @LogRemark, @FirstInTerminalIP, @LogTerminalIP, @FirstInTerminalSN, @LogTerminalSN, @PivotValue,
-- DATEADD(HOUR, 0, DATEDIFF(DAY, 0, @InEventTimestamp)))
End
Else
Set @PivotValue = @PivotValue + 1 -- increment
Set @CurrentDay = @InEventDay -- update the day
-- invalid row (only has IN event)
Insert into @tempInOutDetailReport( UserID, LogDay, TimeIN, RemarkTimeIn, TerminalIPTimeIn,
TerminalSNTimeIn, PivotValue, LogDate )
values( @InEventUserID, @InEventDay, @InEventTimestamp, @InEventRemark, @InEventTerminalIP,
@InEventTerminalSN, @PivotValue, DATEADD(HOUR, 0, DATEDIFF(DAY, 0, @InEventTimestamp)))
End
If((@CurrentDay != @LogDay And @CurrentDay != '') Or (@CurrentUser != @LogUserID And @CurrentUser != '') )
Begin
Insert into @tempInOutDetailReport( UserID, LogDay, TimeIN, TimeOUT, TotalTime, RemarkTimeIn,
RemarkTimeOut, TerminalIPTimeIn, TerminalIPTimeOut, TerminalSNTimeIn, TerminalSNTimeOut, PivotValue,
LogDate)
values( @CurrentUser, @CurrentDay, @FirstInEvent, @OutEventTimestamp, @CurrTotalHours,
@FirstInRemark, @OutEventRemark, @FirstInTerminalIP, @OutEventTerminalIP, @FirstInTerminalSN, @LogTerminalSN, @PivotValue,
DATEADD(HOUR, 0, DATEDIFF(DAY, 0, @InEventTimestamp)))
Set @FirstRecord = 1
End
-- Save it
Set @InEventUserID = @LogUserID
Set @InEventDay = @LogDay
Set @InEventTimestamp = @LogTimeStamp
Set @InEventRemark = @LogRemark
Set @InEventTerminalIP = @LogTerminalIP
Set @InEventTerminalSN = @LogTerminalSN
If (@FirstRecord = 1) -- save for first in event record of the day
Begin
Set @FirstInEvent = @LogTimestamp
Set @FirstInRemark = @LogRemark
Set @FirstInTerminalIP = @LogTerminalIP
Set @FirstInTerminalSN = @LogTerminalSN
Set @CurrTotalHours = 0 --initialise total hours for another day
End
Set @FirstRecord = 0 -- no more first record of the day
Set @ReceiveInEvent = 1 -- indicate we have received an "IN" event
Set @FirstEvent = 0 -- no more "first" event
End
Else If LOWER(@LogEventID) = 'out' -- OUT event
Begin
If @FirstEvent = 1 -- the first OUT record when change users
Begin
-- Check day
If (@CurrentDay != @LogDay) -- change to another day
Set @PivotValue = 0 -- Reset
Else
Set @PivotValue = @PivotValue + 1 -- increment
Set @CurrentDay = @LogDay -- update the day
-- Only an OUT event (no IN event) - invalid record but we show it anyway
Insert into @tempInOutDetailReport( UserID, LogDay, TimeOUT, RemarkTimeOut, TerminalIPTimeOut, TerminalSNTimeOut,
PivotValue, LogDate )
values( @LogUserID, @LogDay, @LogTimestamp, @LogRemark, @LogTerminalIP, @LogTerminalSN, @PivotValue,
DATEADD(HOUR, 0, DATEDIFF(DAY, 0, @LogTimestamp)))
Set @FirstEvent = 0 -- not "first" anymore
End
Else -- Not first event
Begin
If @ReceiveInEvent = 1 -- if there are IN event previously
Begin
-- Check day
If (@CurrentDay != @InEventDay) -- change to another day
Set @PivotValue = 0 -- Reset
Else
Set @PivotValue = @PivotValue + 1 -- increment
Set @CurrentDay = @InEventDay -- update the day
Set @CurrTotalHours = @CurrTotalHours + DATEDIFF(second,@InEventTimestamp, @LogTimeStamp) -- update total time
Set @OutEventRemark = @LogRemark
Set @OutEventTerminalIP = @LogTerminalIP
Set @OutEventTerminalSN = @LogTerminalSN
Set @OutEventTimestamp = @LogTimestamp
-- valid row
--Insert into @tempInOutDetailReport( UserID, LogDay, TimeIN, TimeOUT, TotalTime, RemarkTimeIn,
-- RemarkTimeOut, TerminalIPTimeIn, TerminalIPTimeOut, TerminalSNTimeIn, TerminalSNTimeOut, PivotValue,
-- LogDate)
--values( @LogUserID, @InEventDay, @InEventTimestamp, @LogTimestamp, Datediff(second, @InEventTimestamp, @LogTimeStamp),
-- @InEventRemark, @LogRemark, @InEventTerminalIP, @LogTerminalIP, @InEventTerminalSN, @LogTerminalSN, @PivotValue,
-- DATEADD(HOUR, 0, DATEDIFF(DAY, 0, @InEventTimestamp)))
Set @ReceiveInEvent = 0 -- Reset
End
Else -- no IN event previously
Begin
-- Check day
If (@CurrentDay != @LogDay) -- change to another day
Set @PivotValue = 0 -- Reset
Else
Set @PivotValue = @PivotValue + 1 -- increment
Set @CurrentDay = @LogDay -- update the day
-- invalid row (only has OUT event)
Insert into @tempInOutDetailReport( UserID, LogDay, TimeOUT, RemarkTimeOut, TerminalIPTimeOut, TerminalSNTimeOut,
PivotValue, LogDate )
values( @LogUserID, @LogDay, @LogTimestamp, @LogRemark, @LogTerminalIP, @LogTerminalSN, @PivotValue,
DATEADD(HOUR, 0, DATEDIFF(DAY, 0, @LogTimestamp)) )
End
End
End
Set @CurrentUser = @LogUserID -- update user
Fetch Next
From LogCursor
Into @LogUserID, @LogEventID, @LogTerminalSN, @LogTerminalIP, @LogRemark, @LogTimestamp, @LogDay
End
-- Need to handle the last log if its IN log as it will not be processed by the while loop
if @CurrTrigger='in'
Begin
-- Check day
If (@CurrentDay != @InEventDay) -- change to another day
Set @PivotValue = 0 -- Reset
Else -- same day
Set @PivotValue = @PivotValue + 1 -- increment
Set @CurrentDay = @InEventDay -- update the day
-- invalid row (only has IN event)
Insert into @tempInOutDetailReport( UserID, LogDay, TimeIN, RemarkTimeIn, TerminalIPTimeIn,
TerminalSNTimeIn, PivotValue, LogDate )
values( @InEventUserID, @InEventDay, @InEventTimestamp, @InEventRemark, @InEventTerminalIP,
@InEventTerminalSN, @PivotValue, DATEADD(HOUR, 0, DATEDIFF(DAY, 0, @InEventTimestamp)))
End
else if @CurrTrigger = 'out'
Begin
Insert into @tempInOutDetailReport( UserID, LogDay, TimeIN, TimeOUT, TotalTime, RemarkTimeIn,
RemarkTimeOut, TerminalIPTimeIn, TerminalIPTimeOut, TerminalSNTimeIn, TerminalSNTimeOut, PivotValue,
LogDate)
values( @LogUserID, @CurrentDay, @FirstInEvent, @LogTimestamp, @CurrTotalHours,
@FirstInRemark, @LogRemark, @FirstInTerminalIP, @LogTerminalIP, @FirstInTerminalSN, @LogTerminalSN, @PivotValue,
DATEADD(HOUR, 0, DATEDIFF(DAY, 0, @InEventTimestamp)))
End
Close LogCursor
Deallocate LogCursor
Select *
From @tempInOutDetailReport tempTable
Left Join access_user on access_user.User_ID = tempTable.UserID
Order By tempTable.UserID, LogDate
End
If @@TranCount > @TransactionCountOnEntry
Begin
If @ErrorCode = 0
COMMIT TRANSACTION
Else
ROLLBACK TRANSACTION
End
return @ErrorCode
You can get the SQL call code by right-clicking on the stored procedure in your database. It will look something like this:
DECLARE @RC int
DECLARE @pUserID nvarchar(50)
DECLARE @pDepartmentID int
DECLARE @pStartDateTime datetime
DECLARE @pEndDateTime datetime
-- TODO: Set parameter values here.
EXECUTE @RC = [AccessManager].[dbo].[SP_GenerateInOutDetailReportSimple]
@pUserID,@pDepartmentID,@pStartDateTime,@pEndDateTime
GO
Check the query string I've written; the rest is your homework ;) Sorry for answering at such length - this is my first answer since I registered a few weeks ago.
Close NetBeans before deleting the cache.
Cache is located in C:\Users\<username>\AppData\Local\NetBeans\Cache\
Clear the cache using the %USERPROFILE% Windows variable:
del /s /q %USERPROFILE%\AppData\Local\NetBeans\Cache\
If it is set, you can also use the environment variable %LOCALAPPDATA%:
del /s /q %LOCALAPPDATA%\NetBeans\Cache\
On Linux, the cache is at: ~/.cache/netbeans/${netbeans_version}/index/
On macOS, the cache is at: ~/Library/Caches/NetBeans/${netbeans_version}/
See also http://wiki.netbeans.org/FaqWhatIsUserdir.
On Windows, selecting the Help » About menu will display a dialog that contains the following text:
Product Version: NetBeans IDE 8.0.2 (Build 201411181905)
Java: 1.7.0_80; Java HotSpot(TM) 64-Bit Server VM 24.80-b11
Runtime: Java(TM) SE Runtime Environment 1.7.0_80-b15
System: Windows 7 version 6.1 running on amd64; Cp1252; en_CA (nb)
User directory: C:\Users\Username\AppData\Roaming\NetBeans\8.0.2
Cache directory: C:\Users\Username\AppData\Local\NetBeans\Cache\8.0.2
Regardless of operating system, the About dialog will contain the correct path to the cache directory.
There is a simple way available in Kotlin (assuming the list elements have a name property):
val lst: List<MyObject> = ...
val listOfStrings: List<String> = lst.map { it.name }
SELECT pid, cid, pname, name1, name2
FROM customer1 c1, product p
WHERE p.cid=c1.cid
UNION SELECT pid, cid, pname, name1, name2
FROM customer2 c2, product p
WHERE p.cid=c2.cid;
Most of the answers suggest the Array.prototype.indexOf
method, the only problem is that it will not work on any IE version before IE9.
As an alternative I leave you two more options that will work on all browsers:
if (/Foo|Bar|Baz/.test(str)) {
// ...
}
if (str.match("Foo|Bar|Baz")) {
// ...
}
You may try using AngularJS services; that approach has worked for me. The code snippets are given below.
Controller code:
$scope.total = 0;
var aCart = new CartService();
$scope.addItemToCart = function (product) {
aCart.addCartTotal(product.Price);
};
$scope.showCart = function () {
$scope.total = aCart.getCartTotal();
};
Service Code:
app.service("CartService", function () {
var Total = [];
return function () {
this.addCartTotal = function (inTotal) {
Total.push( inTotal);
}
this.getCartTotal = function () {
var sum = 0;
for (var i = 0; i < Total.length; i++) {
sum += parseInt(Total[i], 10);
}
return sum;
}
};
});
You had thead
in your selector, but there is no thead
in your table. Also you had your selectors backwards. As you mentioned above, you wanted to be adding the tr
class to the th
, not vice-versa (although your comment seems to contradict what you wrote up above).
$('tr th').each(function(index){
    if ($('tr td').eq(index).attr('class') != '') {
        // get the class of the td
        var tdClass = $('tr td').eq(index).attr('class');
        // add it to this th
        $(this).addClass(tdClass);
    }
});
The main reason for primary and foreign keys is to enforce data consistency.
A primary key enforces the consistency of uniqueness of values over one or more columns. If an ID column has a primary key then it is impossible to have two rows with the same ID value. Without that primary key, many rows could have the same ID value and you wouldn't be able to distinguish between them based on the ID value alone.
A foreign key enforces the consistency of data that points elsewhere. It ensures that the data which is pointed to actually exists. In a typical parent-child relationship, a foreign key ensures that every child always points at a parent and that the parent actually exists. Without the foreign key you could have "orphaned" children that point at a parent that doesn't exist.
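For illustration, a minimal sketch in SQL (the table and column names are made up):
CREATE TABLE parent (
    id INT NOT NULL PRIMARY KEY,   -- uniqueness enforced: no two rows can share an id
    name VARCHAR(50)
);
CREATE TABLE child (
    id INT NOT NULL PRIMARY KEY,
    parent_id INT NOT NULL,
    FOREIGN KEY (parent_id) REFERENCES parent (id)  -- must point at an existing parent
);
-- This fails unless a parent with id = 42 exists, so no "orphaned" children:
INSERT INTO child (id, parent_id) VALUES (1, 42);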
You do
printf ("Hi %s,</br />", $name);
before setting the cookies, which isn't allowed. You can't send any output before the headers, not even a blank line.
You are asking a lot of questions that you could answer yourself by reading the documentation, so I'll give you a general advice: read it and experiment in the python shell. You'll see that itemgetter
returns a callable:
>>> func = operator.itemgetter(1)
>>> func(a)
['Paul', 22, 'Car Dealer']
>>> func(a[0])
8
To do it in a different way, you can use lambda:
a.sort(key=lambda x: x[1])
And reverse it:
a.sort(key=operator.itemgetter(1), reverse=True)
Sort by more than one column:
a.sort(key=operator.itemgetter(1,2))
See the sorting How To.
Here is a way to do it with a closure:
from functools import reduce  # needed on Python 3; built in on Python 2

def generate_add_mult_func(func):
def function_generator(x):
return reduce(func,range(1,x))
return function_generator
def add(x,y):
return x+y
def mult(x,y):
return x*y
adding=generate_add_mult_func(add)
multiplying=generate_add_mult_func(mult)
print(adding(10))       # 45
print(multiplying(10))  # 362880
UPDATE totals
SET total = total + 1
WHERE name = 'bill';
If you want to make sure the current value is indeed 203 (and not accidently increase it again) you can also add another condition:
UPDATE totals
SET total = total + 1
WHERE name = 'bill'
AND total = 203;
As mentioned in the responses above, it could be related to AWS security groups or other things. But if you created a user, gave it remote access ('%'), and are still getting this error, check your MySQL config file. On Debian you can find it here: /etc/mysql/my.cnf. Find the line:
bind-address = 127.0.0.1
and change it to:
bind-address = 0.0.0.0
and restart mysql.
on debian/ubuntu:
/etc/init.d/mysql restart
I hope this works for you.
According to IE 9 – What’s Changed? on the HttpWatch blog, IE9 still has a 2 connection limit when over VPN.
Using a VPN Still Clobbers IE 9 Performance
We previously reported about the scaling back of the maximum number of concurrent connections in IE 8 when your PC uses a VPN connection. This happened even if the browser traffic didn’t go over that connection.
Unfortunately, IE 9 is affected by VPN connections in the same way:
Implement the Comparable interface on your customObject.
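A minimal sketch, assuming a hypothetical CustomObject with a numeric field you want to sort by:
public class CustomObject implements Comparable<CustomObject> {
    private final int value;

    public CustomObject(int value) {
        this.value = value;
    }

    @Override
    public int compareTo(CustomObject other) {
        // Ascending order by value; negate for descending.
        return Integer.compare(this.value, other.value);
    }
}
Collections.sort(listOfCustomObjects) will then sort using this ordering.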
/^(?:(?:\(?(?:00|\+)([1-4]\d\d|[1-9]\d?)\)?)?[\-\.\ \\\/]?)?((?:\(?\d{1,}\)?[\-\.\ \\\/]?){0,})(?:[\-\.\ \\\/]?(?:#|ext\.?|extension|x)[\-\.\ \\\/]?(\d+))?$/i
This matches:
- (+351) 282 43 50 50
- 90191919908
- 555-8909
- 001 6867684
- 001 6867684x1
- 1 (234) 567-8901
- 1-234-567-8901 x1234
- 1-234-567-8901 ext1234
- 1-234 567.89/01 ext.1234
- 1(234)5678901x1234
- (123)8575973
- (0055)(123)8575973
In the capture groups it saves: $1 the country indicator, $2 the phone number, and $3 the extension.
You can test it on https://www.regexpal.com/?fam=99127
You should avoid having two checkboxes with the same name if you plan to reference them like document.FC.c1
. If you have multiple checkboxes named c1
how will the browser know which you are referring to?
Here's a non-jQuery solution to check if any checkboxes on the page are checked.
var checkboxes = document.querySelectorAll('input[type="checkbox"]');
var checkedOne = Array.prototype.slice.call(checkboxes).some(x => x.checked);
You need the Array.prototype.slice.call
part to convert the NodeList
returned by document.querySelectorAll
into an array that you can call some
on.
The window.navigator.platform property is not spoofed when the userAgent string is changed. I tested on my Mac: if I change the userAgent to iPhone or Chrome on Windows, navigator.platform remains MacIntel. The property is also read-only.
I came up with the following table.
Mac computers:
Mac68K - Macintosh 68K system.
MacPPC - Macintosh PowerPC system.
MacIntel - Macintosh Intel system.
iOS devices:
iPhone - iPhone.
iPod - iPod Touch.
iPad - iPad.
Modern Macs return navigator.platform == "MacIntel", but to be somewhat "future proof" don't use exact matching; hopefully they will change it to something like MacARM or MacQuantum in the future.
var isMac = navigator.platform.toUpperCase().indexOf('MAC')>=0;
To include iOS that also use the "left side"
var isMacLike = /(Mac|iPhone|iPod|iPad)/i.test(navigator.platform);
var isIOS = /(iPhone|iPod|iPad)/i.test(navigator.platform);
var is_OSX = /(Mac|iPhone|iPod|iPad)/i.test(navigator.platform);
var is_iOS = /(iPhone|iPod|iPad)/i.test(navigator.platform);

var is_Mac = navigator.platform.toUpperCase().indexOf('MAC') >= 0;
var is_iPhone = navigator.platform == "iPhone";
var is_iPod = navigator.platform == "iPod";
var is_iPad = navigator.platform == "iPad";

/* Output */
var out = document.getElementById('out');
if (!is_OSX) out.innerHTML += "This is NOT a Mac or an iOS Device!";
if (is_Mac) out.innerHTML += "This is a Mac Computer!\n";
if (is_iOS) out.innerHTML += "You're using an iOS Device!\n";
if (is_iPhone) out.innerHTML += "This is an iPhone!";
if (is_iPod) out.innerHTML += "This is an iPod Touch!";
if (is_iPad) out.innerHTML += "This is an iPad!";
out.innerHTML += "\nPlatform: " + navigator.platform;

<pre id="out"></pre>
Since most operating systems put the close button on the right, you can just move the close button to the left when the user is on a Mac-like OS; otherwise it isn't a problem to put it on the most common side, the right.
setTimeout(test, 1000); //delay for demonstration

function test() {
    var mac = /(Mac|iPhone|iPod|iPad)/i.test(navigator.platform);
    if (mac) {
        document.getElementById('close').classList.add("left");
    }
}

#window {
    position: absolute;
    margin: 1em;
    width: 300px;
    padding: 10px;
    border: 1px solid gray;
    background-color: #DDD;
    text-align: center;
    box-shadow: 0px 1px 3px #000;
}
#close {
    position: absolute;
    top: 0px;
    right: 0px;
    width: 22px;
    height: 22px;
    margin: -12px;
    box-shadow: 0px 1px 3px #000;
    background-color: #000;
    border: 2px solid #FFF;
    border-radius: 22px;
    color: #FFF;
    text-align: center;
    font: 14px "Comic Sans MS", Monaco;
}
#close.left {
    left: 0px;
}

<div id="window">
    <div id="close">x</div>
    <p>Hello!</p>
    <p>If the "close button" change to the left side</p>
    <p>you're on a Mac like system!</p>
</div>
http://www.nczonline.net/blog/2007/12/17/don-t-forget-navigator-platform/
This one worked for me:
\d+[ ](?:[A-Za-z0-9.-]+[ ]?)+(?:Avenue|Lane|Road|Boulevard|Drive|Street|Ave|Dr|Rd|Blvd|Ln|St)\.?
The source: https://www.codeproject.com/Tips/989012/Validate-and-Find-Addresses-with-RegEx
I solved it by following the steps below:
Changed
mongodb://localhost:27017/local
to
localhost/local
And the error was gone.
First, the enum methods shouldn't be in all caps. They are methods just like other methods, with the same naming convention.
Second, what you are doing is not the best possible way to set up your enum. Instead of using an array of values for the values, you should use separate variables for each value. You can then implement the constructor like you would any other class.
Here's how you should do it with all the suggestions above:
public enum States {
...
MASSACHUSETTS("Massachusetts", "MA", true),
MICHIGAN ("Michigan", "MI", false),
...; // all 50 of those
private final String full;
private final String abbr;
private final boolean originalColony;
private States(String full, String abbr, boolean originalColony) {
this.full = full;
this.abbr = abbr;
this.originalColony = originalColony;
}
public String getFullName() {
return full;
}
public String getAbbreviatedName() {
return abbr;
}
public boolean isOriginalColony(){
return originalColony;
}
}
This script will read lines from a large file and write them to new, smaller files. It duplicates the first line (the header) into every child file.
Dim strLine
lCounter = 1
fCounter = 1
cPosition = 1
MaxLine = 1000
splitAt = MaxLine
Dim fHeader
sFile = "inputFile.txt"
dFile = LEFT(sFile, (LEN(sFile)-4))& "_0" & fCounter & ".txt"
Set objFileToRead = CreateObject("Scripting.FileSystemObject").OpenTextFile(sFile,1)
Set objFileToWrite = CreateObject("Scripting.FileSystemObject").OpenTextFile(dFile,2,true)
do while not objFileToRead.AtEndOfStream
strLine = objFileToRead.ReadLine()
objFileToWrite.WriteLine(strLine)
If cPosition = 1 Then
fHeader = strLine
End If
If cPosition = splitAt Then
fCounter = fCounter + 1
splitAt = splitAt + MaxLine
objFileToWrite.Close
Set objFileToWrite = Nothing
If fCounter < 10 Then
dFile=LEFT(dFile, (LEN(dFile)-5))& fCounter & ".txt"
Set objFileToWrite = CreateObject("Scripting.FileSystemObject").OpenTextFile(dFile,2,true)
objFileToWrite.WriteLine(fHeader)
ElseIf fCounter <100 Or fCounter = 100 Then
dFile=LEFT(dFile, (LEN(dFile)-6))& fCounter & ".txt"
Set objFileToWrite = CreateObject("Scripting.FileSystemObject").OpenTextFile(dFile,2,true)
objFileToWrite.WriteLine(fHeader)
Else
dFile=LEFT(dFile, (LEN(dFile)-7)) & fCounter & ".txt"
Set objFileToWrite = CreateObject("Scripting.FileSystemObject").OpenTextFile(dFile,2,true)
objFileToWrite.WriteLine(fHeader)
End If
End If
lCounter=lCounter + 1
cPosition=cPosition + 1
Loop
objFileToWrite.Close
Set objFileToWrite = Nothing
objFileToRead.Close
Set objFileToRead = Nothing
This stuff comes from ES File Explorer.
Just go into the app > Settings.
There is an option called "logging floating window"; you just need to disable it, and you will get rid of this infernal bubble for good.
You could also flip your if test:
foreach ( int number in numbers )
{
if ( number >= 0 )
{
//process number
}
}
currentDay = '2019-12-06';
currentDay = new Date(currentDay).add(Date.DAY, +1).format('Y-m-d');
I solved this problem by adding a serialization method to my model:
def toJSON(self):
import simplejson
return simplejson.dumps(dict([(attr, getattr(self, attr)) for attr in [f.name for f in self._meta.fields]]))
Here's the verbose equivalent for those averse to one-liners:
def toJSON(self):
fields = []
for field in self._meta.fields:
fields.append(field.name)
d = {}
for attr in fields:
d[attr] = getattr(self, attr)
import simplejson
return simplejson.dumps(d)
_meta.fields
is an ordered list of model fields which can be accessed from instances and from the model itself.
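A quick usage sketch (MyModel stands in for your model class):
instance = MyModel.objects.get(pk=1)
json_string = instance.toJSON()  # a JSON string of all the instance's fields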
I think you will find this useful:
http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-two-datasources
It shows how to define multiple datasources and assign one of them as primary.
Here is a rather full example; it also covers distributed transactions, if you need them.
What you need is to create two configuration classes and separate the model/repository packages etc. to make the config easy.
Also, the above example creates the data sources manually. You can avoid this using the method in the Spring docs, with the @ConfigurationProperties annotation. Here is an example of this:
http://xantorohara.blogspot.com.tr/2013/11/spring-boot-jdbc-with-multiple.html
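For reference, a minimal sketch of the @ConfigurationProperties approach (the property prefixes are placeholders; note the DataSourceBuilder package moved to org.springframework.boot.jdbc in Boot 2+):
import javax.sql.DataSource;

import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder; // org.springframework.boot.jdbc in Boot 2+
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceConfig {

    // Bound from e.g. app.datasource.primary.url, .username, .password in application.properties
    @Bean
    @Primary
    @ConfigurationProperties(prefix = "app.datasource.primary")
    public DataSource primaryDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "app.datasource.secondary")
    public DataSource secondaryDataSource() {
        return DataSourceBuilder.create().build();
    }
}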
Hope this helps.
Rather than changing the type of the tag, you should be changing the style of the tag (or rather, of the tag with a specific id). It's not good practice to change the elements of your document to apply stylistic changes. Try this:
$('a#change').click(function() {
    $('p#changed').css("font-weight", "bold");
});
<p id="changed">Hello!</p>
<a id="change">change</a>
Found out how to make it work!
I just moved apache2 (the Web Sharing folder) to my desktop.
Go to Terminal and type "mv /etc/apache2/ /Users/hseungun/Desktop".
It will actually say you need authority, so type "sudo -s" to drop into a bash-3.2 root shell.
Set your password with passwd root, and then run "mv /etc/apache2/ /Users/hseungun/Desktop".
Try turning on Web Sharing, and then start XAMPP on the Mac.
Found this in the Google documentation pages:
In your example, you would get (if you picked the 3rd row) "C3:O3", because C --> O is 12 columns.
edit
Using the example on the docu:
// The code below will get the number of columns for the range C2:G8
// in the active spreadsheet, which happens to be "4"
var count = SpreadsheetApp.getActiveSheet().getRange(2, 3, 6, 4).getNumColumns();
Browser.msgBox(count);
The values between brackets:
2: the starting row = 2
3: the starting col = C
6: the number of rows = 6 so from 2 to 8
4: the number of cols = 4 so from C to G
So you come to the range: C2:G8
In programming terms, it's the larger surrounding part which can have any influence on the behaviour of the current unit of work. E.g. the running environment used, the environment variables, instance variables, local variables, state of other classes, state of the current environment, etcetera.
In some API's you see this name back in an interface/class, e.g. Servlet's ServletContext
, JSF's FacesContext
, Spring's ApplicationContext
, Android's Context
, JNDI's InitialContext
, etc. They all often follow the Facade Pattern which abstracts the environmental details the enduser doesn't need to know about away in a single interface/class.
The multiplier is changed at every coordinate because of the great circle distance theory as written here :
http://en.wikipedia.org/wiki/Great-circle_distance
and you can calculate the nearest value using this formula described here:
http://en.wikipedia.org/wiki/Great-circle_distance#Worked_example
The key is converting each degree-minute-second value to a decimal degree value:
N 36°7.2', W 86°40.2'    (N = +, S = -, E = +, W = -)
referencing the Greenwich meridian and the Equator:
(phi) 36.12° = 36° + 7.2'/60'
(lambda) -86.67° = -(86° + 40.2'/60')
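A minimal sketch of that conversion in Python (the helper name is made up):
def dms_to_decimal(degrees, minutes, seconds=0.0, negative=False):
    """Convert degrees/minutes/seconds to decimal degrees.

    Pass negative=True for south latitudes and west longitudes.
    """
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if negative else value

phi = dms_to_decimal(36, 7.2)                  # 36.12
lam = dms_to_decimal(86, 40.2, negative=True)  # -86.67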
I just modified the following line in the users and password_resets migration files.
Old : $table->string('email')->unique();
New : $table->string('email', 128)->unique();
You are using a cache system. You can modify the original file and clear the cache to show the updates.
I think the problem is that you want one result for each p.id?
But you are getting "duplicate" results for some p.id's, is that right?
The DISTINCT keyword applies to the entire result set, so applies to pl.nm, pl.val, pl.txt_val, not just p.id.
You need something like
SELECT TOP 10 p.id, max(pl.nm), max(pl.val), ...
FROM ...
GROUP BY p.id
You won't need the DISTINCT keyword then.
What you want is %.2f
, not 2%f
.
Also, you might want to replace your %d
with a %f
;)
#include <cstdio>
int main()
{
printf("When this number: %f is assigned to 2 dp, it will be: %.2f ", 94.9456, 94.9456);
return 0;
}
This will output:
When this number: 94.945600 is assigned to 2 dp, it will be: 94.95
See here for a full description of the printf formatting options: printf
In case you are targeting Win32 platforms or NT kernel mode, you should have a look at cfix.
Look at @JRun's answer here. The idea is to create HTML with your custom scheme and upload it somewhere. Then, if you click on your custom link in that HTML file, you will be redirected to your app. I used this article for Android. But don't forget to set the full name attribute, Name = "MyApp.Mobile.Droid.MainActivity", on your target activity.
When you call $("#my_input").val() it returns a string. So use parseFloat or parseInt to convert it; after parseFloat, the runtime treats the value as an actual number rather than text.
You can also convert a float to a string using toFixed, which takes the number of digits as an argument, as below:
var i = 0.011;
var ss = i.toFixed(2); //It returns 0.01
Make it
float b = 3.6f;
A floating-point literal is of type float if it is suffixed with an ASCII letter F or f; otherwise its type is double and it can optionally be suffixed with an ASCII letter D or d
Use iloc to access by position (rather than label):
In [11]: df = pd.DataFrame([[1, 2], [3, 4]], ['a', 'b'], ['A', 'B'])
In [12]: df
Out[12]:
A B
a 1 2
b 3 4
In [13]: df.iloc[0] # first row in a DataFrame
Out[13]:
A 1
B 2
Name: a, dtype: int64
In [14]: df['A'].iloc[0] # first item in a Series (Column)
Out[14]: 1
The above answer is useful, but in graphs it is difficult to distinguish between darker gradients of black. One alternative I found is to use gradients of gray colors, as follows:
palette(gray.colors(10, 0.9, 0.4))
plot(rep(1,10), col=1:10, pch=19, cex=3)
More info on gray scale here.
Added
When I used the code above for different colours like blue and black, the gradients were not that clear.
heat.colors()
seems more useful.
This document (PDF) has more detailed information and options.
Use .prop()
instead and if we go with your code then compare like this:
Look at the example jsbin:
$("#news_list tr").click(function () {
var ele = $(this).find(':checkbox');
if ($(':checked').length) {
ele.prop('checked', false);
$(this).removeClass('admin_checked');
} else {
ele.prop('checked', true);
$(this).addClass('admin_checked');
}
});
Changes:
- input changed to :checkbox.
- Check the length of the checked checkboxes.
Maybe you need this:
SELECT DATEADD(DD, 1 - DATEPART(DW, GETDATE()), GETDATE())
Or
DECLARE @MYDATE DATETIME
SET @MYDATE = '2011-08-23'
SELECT DATEADD(DD, 1 - DATEPART(DW, @MYDATE), @MYDATE)
Function
CREATE FUNCTION [dbo].[GetFirstDayOfWeek]
( @pInputDate DATETIME )
RETURNS DATETIME
BEGIN
SET @pInputDate = CONVERT(VARCHAR(10), @pInputDate, 111)
RETURN DATEADD(DD, 1 - DATEPART(DW, @pInputDate),
@pInputDate)
END
GO
Thus, to summarise the existing discussion, adding
cex.lab=1.5, cex.axis=1.5, cex.main=1.5, cex.sub=1.5
to your plot, where 1.5 could be 2, 3, etc. and a value of 1 is the default will increase the font size.
x <- rnorm(100)
hist(x, xlim=range(x),
xlab= "Variable Lable", ylab="density", main="Title of plot", prob=TRUE)
hist(x, xlim=range(x),
xlab= "Variable Lable", ylab="density", main="Title of plot", prob=TRUE,
cex=1.5)
hist(x, xlim=range(x),
xlab= "Variable Lable", ylab="density", main="Title of plot", prob=TRUE,
cex.lab=1.5, cex.axis=1.5, cex.main=1.5, cex.sub=1.5)
Final working solution:
public static String getFileName(Uri uri) {
try {
String path = uri.getLastPathSegment();
return path != null ? path.substring(path.lastIndexOf("/") + 1) : "unknown";
} catch (Exception e) {
e.printStackTrace();
}
return "unknown";
}
Simple solution:
#include <ctype.h>

int str_case_ins_cmp(const char* a, const char* b) {
int rc;
while (1) {
rc = tolower((unsigned char)*a) - tolower((unsigned char)*b);
if (rc || !*a) {
break;
}
++a;
++b;
}
return rc;
}
To view the files, just browse them from the command prompt (cmd), e.g.:
c:\>cd \Windows\assembly\GAC_32
c:\Windows\assembly\GAC_32> dir
To add and remove files from the GAC use the tool gacutil
I would assume it's possible to place a proxy between the DB and your app then observe the communication. I'm not familiar with what software you would use to do this.
javascript:history.go(-1);
was used in older browsers such as IE6. For compatibility with other browsers, try
window.history.go(-1);
where -1 represents the number of pages you want to go back (-1, -2, ...etc) and
return false
is required to prevent the default event.
For example :
<a href="#" onclick="window.history.go(-1); return false;"> Link </a>
4 possible ways; use the one you need.
1. Kotlin:
val ll = findViewById<LinearLayout>(R.id.your_layout_id)
ll.setBackgroundColor(ContextCompat.getColor(this, R.color.white))
2. Data binding:
<LinearLayout
android:background="@{@color/white}"
OR, a more useful statement:
<LinearLayout
android:background="@{model.colorResId}"
3. XML:
<LinearLayout
android:background="#FFFFFF"
<LinearLayout
android:background="@color/white"
4. Java:
LinearLayout ll = (LinearLayout) findViewById(R.id.your_layout_id);
ll.setBackgroundColor(ContextCompat.getColor(this, R.color.white));
hsb.s = max != 0 ? 255 * delta / max : 0;
?
is a ternary operator. It works like an if
in conjunction with the :
!=
means not equals
So, the long form of this line would be
if (max != 0) { //if max is not zero
hsb.s = 255 * delta / max;
} else {
hsb.s = 0;
}
That hex might need to be wrapped in quotes and made into a string; JavaScript might not like the # character.
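For example (a minimal sketch):
var color = "#FF0000"; // quoted: an ordinary string, fine
// var color = #FF0000; // unquoted: a syntax error, since # starts no valid token
document.body.style.backgroundColor = color;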
Actually, it seems sufficient to merely attach a stock WebChromeClient to the client view, ala
mWebView.setWebChromeClient(new WebChromeClient());
and you need to have hardware acceleration turned on!
At least, if you don't need to play a full-screen video, there's no need to pull the VideoView out of the WebView and push it into the Activity's view. It will play in the video element's allotted rect.
Any ideas how to intercept the expand video button?
Using a dynamic variable in C# is the simplest approach.
Newtonsoft.Json.Linq has the JValue class, which can be used here. Below is sample code which displays the question id and text from the JSON string you have.
string jsonString = "[{\"Question\":{\"QuestionId\":49,\"QuestionText\":\"Whats your name?\",\"TypeId\":1,\"TypeName\":\"MCQ\",\"Model\":{\"options\":[{\"text\":\"Rahul\",\"selectedMarks\":\"0\"},{\"text\":\"Pratik\",\"selectedMarks\":\"9\"},{\"text\":\"Rohit\",\"selectedMarks\":\"0\"}],\"maxOptions\":10,\"minOptions\":0,\"isAnswerRequired\":true,\"selectedOption\":\"1\",\"answerText\":\"\",\"isRangeType\":false,\"from\":\"\",\"to\":\"\",\"mins\":\"02\",\"secs\":\"04\"}},\"CheckType\":\"\",\"S1\":\"\",\"S2\":\"\",\"S3\":\"\",\"S4\":\"\",\"S5\":\"\",\"S6\":\"\",\"S7\":\"\",\"S8\":\"\",\"S9\":\"Pratik\",\"S10\":\"\",\"ScoreIfNoMatch\":\"2\"},{\"Question\":{\"QuestionId\":51,\"QuestionText\":\"Are you smart?\",\"TypeId\":3,\"TypeName\":\"True-False\",\"Model\":{\"options\":[{\"text\":\"True\",\"selectedMarks\":\"7\"},{\"text\":\"False\",\"selectedMarks\":\"0\"}],\"maxOptions\":10,\"minOptions\":0,\"isAnswerRequired\":false,\"selectedOption\":\"3\",\"answerText\":\"\",\"isRangeType\":false,\"from\":\"\",\"to\":\"\",\"mins\":\"01\",\"secs\":\"04\"}},\"CheckType\":\"\",\"S1\":\"\",\"S2\":\"\",\"S3\":\"\",\"S4\":\"\",\"S5\":\"\",\"S6\":\"\",\"S7\":\"True\",\"S8\":\"\",\"S9\":\"\",\"S10\":\"\",\"ScoreIfNoMatch\":\"2\"}]";
dynamic myObject = JValue.Parse(jsonString);
foreach (dynamic questions in myObject)
{
Console.WriteLine(questions.Question.QuestionId + "." + questions.Question.QuestionText.ToString());
}
Console.Read();
You can use the BytesIO
class to get a wrapper around strings that behaves like a file. The BytesIO
object provides the same interface as a file, but saves the contents just in memory:
import io
with io.BytesIO() as output:
image.save(output, format="GIF")
contents = output.getvalue()
You have to explicitly specify the output format with the format
parameter, otherwise PIL will raise an error when trying to automatically detect it.
If you loaded the image from a file it has a format
parameter that contains the original file format, so in this case you can use format=image.format
.
In old Python 2 versions before introduction of the io
module you would have used the StringIO
module instead.
Here is solution for dynamic queries.
For example if you have more tables with different suffix:
dbo.SOMETHINGTABLE_ONE, dbo.SOMETHINGTABLE_TWO
Code:
DECLARE @INDEX AS NVARCHAR(20) = 'ONE' -- example value: the table suffix to query
DECLARE @CheckVALUE AS NVARCHAR(max) = 'SELECT COUNT(SOMETHING) FROM
dbo.SOMETHINGTABLE_'+@INDEX+''
DECLARE @tempTable Table (TempVALUE int)
DECLARE @RESULTVAL INT
INSERT INTO @tempTable
EXEC sp_executesql @CheckVALUE
SET @RESULTVAL = (SELECT * FROM @tempTable)
DELETE @tempTable
SELECT @RESULTVAL
I had the same problem where I was receiving a null reference exception and the word "null" would be displayed in the editor. I tried a handful of solutions, including upgrading the editor to 3.4.1 to no avail.
I ended up having to edit the source. At about line 416 to 426 in _source\plugins\wysiwygarea\plugin.js, there's a snippet like this:
iframe = CKEDITOR.dom.element.createFromHtml( '<iframe' + ... + '></iframe>' );
In FF at least, the iframe isn't completely instantiated by the time it's needed. I surrounded the rest of the function after that line with a setTimeout function:
iframe = CKEDITOR.dom.element.createFromHtml( '<iframe' + ... + '></iframe>' );
setTimeout(function()
{
// Running inside of Firefox chrome the load event doesn't bubble like in a normal page (#5689)
...
}, 1000);
};
// The script that launches the bootstrap logic on 'domReady', so the document
...
The text renders consistently now in the modal dialogs.
You can't refer to a property of an object before you have initialized that object; use an external variable.
var key1 = "it";
var obj = {
key1 : key1,
key2 : key1 + " works!"
};
Also, this is not a "JSON object"; it is a Javascript object. JSON is a method of representing an object with a string (which happens to be valid Javascript code).
The jQuery UI sortable
feature includes a serialize
method to do this. It's quite simple, really. Here's a quick example that sends the data to the specified URL as soon as an element has changes position.
$('#element').sortable({
axis: 'y',
update: function (event, ui) {
var data = $(this).sortable('serialize');
// POST to server using $.post or $.ajax
$.ajax({
data: data,
type: 'POST',
url: '/your/url/here'
});
}
});
What this does is that it creates an array of the elements using the elements id
. So, I usually do something like this:
<ul id="sortable">
<li id="item-1"></li>
<li id="item-2"></li>
...
</ul>
When you use the serialize
option, it will create a POST query string like this: item[]=1&item[]=2
etc. So if you make use - for example - your database IDs in the id
attribute, you can then simply iterate through the POSTed array and update the elements' positions accordingly.
For example, in PHP:
$i = 0;
foreach ($_POST['item'] as $value) {
// Execute statement:
// UPDATE [Table] SET [Position] = $i WHERE [EntityId] = $value
$i++;
}
Addendum to @sjngm's answer:
They both also ignore whitespace:
var foo = " 3 ";
console.log(parseInt(foo)); // 3
console.log(Number(foo)); // 3
It is not exactly correct. As sjngm wrote, parseInt parses the string up to the first non-numeric character. The problem arises when you want to parse a number whose digit groups are separated by whitespace, e.g. "12 345". In that case parseInt("12 345") will return 12 instead of 12345.
So to avoid that situation you must trim whitespaces before parsing to number.
My solution would be:
var number=parseInt("12 345".replace(/\s+/g, ''),10);
Notice one extra thing I used in the parseInt() function: parseInt("string", 10) parses the number as decimal. If you parsed a string like "08" without the radix, you could get 0, because 8 is not an octal digit. An explanation is here.
npm prune [[<@scope>/]<pkg>...] [--production] [--dry-run] [--json]
This command removes "extraneous" packages. If a package name is provided, then only packages matching one of the supplied names are removed.
Extraneous packages are packages that are not listed on the parent package's dependencies list.
If the --production flag is specified or the NODE_ENV environment variable is set to production, this command will remove the packages specified in your devDependencies. Setting --no-production will negate NODE_ENV being set to production.
If the --dry-run flag is used then no changes will actually be made.
If the --json flag is used then the changes npm prune made (or would have made with --dry-run) are printed as a JSON object.
In normal operation with package-locks enabled, extraneous modules are pruned automatically when modules are installed and you'll only need this command with the --production flag.
If you've disabled package-locks then extraneous modules will not be removed and it's up to you to run npm prune from time-to-time to remove them.
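For example, using the flags described above:
npm prune                  # remove extraneous packages
npm prune --dry-run --json # preview what would be removed, as JSON
npm prune --production     # also remove packages listed in devDependencies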
npm dedupe
npm ddp
Searches the local package tree and attempts to simplify the overall structure by moving dependencies further up the tree, where they can be more effectively shared by multiple dependent packages.
For example, consider this dependency graph:
a
+-- b <-- depends on [email protected]
| `-- [email protected]
`-- d <-- depends on c@~1.0.9
`-- [email protected]
In this case, npm-dedupe will transform the tree to:
a
+-- b
+-- d
`-- [email protected]
Because of the hierarchical nature of node's module lookup, b and d will both get their dependency met by the single c package at the root level of the tree.
The deduplication algorithm walks the tree, moving each dependency as far up in the tree as possible, even if duplicates are not found. This will result in both a flat and deduplicated tree.
A small improvement to the code from ryyst (who got the most votes) is to not use a dynamically allocated decoding table, but rather a static const precomputed table. This eliminates the use of the pointer and the initialization of the table, and also avoids a memory leak if one forgets to clean up the decoding table with base64_cleanup().
(By the way, in base64_cleanup(), after calling free(decoding_table) one should set decoding_table = NULL; otherwise accidentally calling base64_decode after base64_cleanup() will crash or cause undefined behavior.)
Another solution could be to use std::unique_ptr, but I'm satisfied with just having a const char[256] on the stack and avoiding pointers altogether; the code looks cleaner and shorter this way.
The decoding table is computed as follows:
#include <iostream>
using namespace std;

const char encoding_table[] = {
'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',
'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',
'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',
'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',
'o', 'p', 'q', 'r', 's', 't', 'u', 'v',
'w', 'x', 'y', 'z', '0', '1', '2', '3',
'4', '5', '6', '7', '8', '9', '+', '/' };
unsigned char decoding_table[256];
for (int i = 0; i < 256; i++)
decoding_table[i] = '\0';
for (int i = 0; i < 64; i++)
decoding_table[(unsigned char)encoding_table[i]] = i;
for (int i = 0; i < 256; i++)
cout << "0x" << (int(decoding_table[i]) < 16 ? "0" : "") << hex << int(decoding_table[i]) << (i != 255 ? "," : "") << ((i+1) % 16 == 0 ? '\n' : '\0');
cin.ignore();
and the modified code I am using is:
static const char encoding_table[] = {
'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',
'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',
'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',
'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',
'o', 'p', 'q', 'r', 's', 't', 'u', 'v',
'w', 'x', 'y', 'z', '0', '1', '2', '3',
'4', '5', '6', '7', '8', '9', '+', '/' };
static const unsigned char decoding_table[256] = {
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x3e, 0x00, 0x00, 0x00, 0x3f,
0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e,
0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28,
0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, 0x30, 0x31, 0x32, 0x33, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
char* base64_encode(const unsigned char *data, size_t input_length, size_t &output_length) {
const int mod_table[] = { 0, 2, 1 };
output_length = 4 * ((input_length + 2) / 3);
char *encoded_data = (char*)malloc(output_length);
if (encoded_data == nullptr)
return nullptr;
for (int i = 0, j = 0; i < input_length;) {
uint32_t octet_a = i < input_length ? (unsigned char)data[i++] : 0;
uint32_t octet_b = i < input_length ? (unsigned char)data[i++] : 0;
uint32_t octet_c = i < input_length ? (unsigned char)data[i++] : 0;
uint32_t triple = (octet_a << 0x10) + (octet_b << 0x08) + octet_c;
encoded_data[j++] = encoding_table[(triple >> 3 * 6) & 0x3F];
encoded_data[j++] = encoding_table[(triple >> 2 * 6) & 0x3F];
encoded_data[j++] = encoding_table[(triple >> 1 * 6) & 0x3F];
encoded_data[j++] = encoding_table[(triple >> 0 * 6) & 0x3F];
}
for (int i = 0; i < mod_table[input_length % 3]; i++)
encoded_data[output_length - 1 - i] = '=';
return encoded_data;
};
unsigned char* base64_decode(const char *data, size_t input_length, size_t &output_length) {
if (input_length % 4 != 0)
return nullptr;
output_length = input_length / 4 * 3;
if (data[input_length - 1] == '=') (output_length)--;
if (data[input_length - 2] == '=') (output_length)--;
unsigned char* decoded_data = (unsigned char*)malloc(output_length);
if (decoded_data == nullptr)
return nullptr;
for (int i = 0, j = 0; i < input_length;) {
uint32_t sextet_a = data[i] == '=' ? 0 & i++ : decoding_table[(unsigned char)data[i++]];
uint32_t sextet_b = data[i] == '=' ? 0 & i++ : decoding_table[(unsigned char)data[i++]];
uint32_t sextet_c = data[i] == '=' ? 0 & i++ : decoding_table[(unsigned char)data[i++]];
uint32_t sextet_d = data[i] == '=' ? 0 & i++ : decoding_table[(unsigned char)data[i++]];
uint32_t triple = (sextet_a << 3 * 6)
+ (sextet_b << 2 * 6)
+ (sextet_c << 1 * 6)
+ (sextet_d << 0 * 6);
if (j < output_length) decoded_data[j++] = (triple >> 2 * 8) & 0xFF;
if (j < output_length) decoded_data[j++] = (triple >> 1 * 8) & 0xFF;
if (j < output_length) decoded_data[j++] = (triple >> 0 * 8) & 0xFF;
}
return decoded_data;
};
The simplest way is this:
while (cursor.moveToNext()) {
...
}
The cursor starts before the first result row, so on the first iteration this moves to the first result if it exists. If the cursor is empty, or the last row has already been processed, then the loop exits neatly.
Of course, don't forget to close the cursor once you're done with it, preferably in a finally
clause.
Cursor cursor = db.rawQuery(...);
try {
while (cursor.moveToNext()) {
...
}
} finally {
cursor.close();
}
If you target API 19+, you can use try-with-resources.
try (Cursor cursor = db.rawQuery(...)) {
while (cursor.moveToNext()) {
...
}
}
In my case event.stopPropagation();
was making my page refresh each time I pressed on a link so I had to find another solution.
So what I did was to catch the event on the parent and block the trigger if it was actually coming from its child, using event.target.
Here is the solution:
if (!angular.element($event.target).hasClass('some-unique-class-from-your-child')) ...
So basically your ng-click from your parent component works only if you clicked on the parent. If you clicked on the child, it won't pass this condition and won't continue its flow.
Definitely Method A, because it's pooled and thread safe.
If you are using httpclient 4.x, the connection manager is called ThreadSafeClientConnManager. See this link for further details (scroll down to "Pooling connection manager"). For example:
HttpParams params = new BasicHttpParams();
SchemeRegistry registry = new SchemeRegistry();
registry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));
ClientConnectionManager cm = new ThreadSafeClientConnManager(params, registry);
HttpClient client = new DefaultHttpClient(cm, params);
Use the Moment Timezone library; see their website: https://momentjs.com/timezone/docs/#/using-timezones/converting-to-zone/
I notice they also use their own library on their website, so you can try it in the browser console before installing it.
moment().tz(String);
The moment#tz mutator will change the time zone and update the offset.
moment("2013-11-18").tz("America/Toronto").format('Z'); // -05:00
moment("2013-11-18").tz("Europe/Berlin").format('Z'); // +01:00
This information is used consistently in other operations, like calculating the start of the day.
var m = moment.tz("2013-11-18 11:55", "America/Toronto");
m.format(); // 2013-11-18T11:55:00-05:00
m.startOf("day").format(); // 2013-11-18T00:00:00-05:00
m.tz("Europe/Berlin").format(); // 2013-11-18T06:00:00+01:00
m.startOf("day").format(); // 2013-11-18T00:00:00+01:00
Without an argument, moment#tz returns:
the time zone name assigned to the moment instance or
undefined if a time zone has not been set.
var m = moment.tz("2013-11-18 11:55", "America/Toronto");
m.tz(); // America/Toronto
var m = moment.tz("2013-11-18 11:55");
m.tz() === undefined; // true
It's worth noting that the QuerySet.values_list()
method doesn't actually return a list, but an object of type django.db.models.query.ValuesListQuerySet
, in order to maintain Django's goal of lazy evaluation, i.e. the DB query required to generate the 'list' isn't actually performed until the object is evaluated.
Somewhat irritatingly, though, this object has a custom __repr__
method which makes it look like a list when printed out, so it's not always obvious that the object isn't really a list.
The exception in the question is caused by the fact that custom objects cannot be serialized in JSON, so you'll have to convert it to a list first, with...
my_list = list(self.get_queryset().values_list('code', flat=True))
...then you can convert it to JSON with...
json_data = json.dumps(my_list)
You'll also have to place the resulting JSON data in an HttpResponse
object, which, apparently, should have a Content-Type
of application/json
, with...
response = HttpResponse(json_data, content_type='application/json')
...which you can then return from your function.
testvar[1] is the value at that array index, which is the number 2. Numbers don't have a length property, so you're checking 2.length, which is undefined. If you want the length of the array, just check testvar.length.
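A minimal sketch of the difference:
var testvar = [4, 2, 7];
console.log(testvar.length);    // 3 - arrays have a length property
console.log(testvar[1]);        // 2 - a number
console.log(testvar[1].length); // undefined - numbers have no length property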
If the latitude coordinate is reported as -6.3572375290155 or -63.572375290155 in decimal degrees then you could round-off and store up to 6 decimal places for 10 cm (or 0.1 meters) precision.
The valid range of latitude in degrees is -90 and +90 for the southern and northern hemisphere respectively. Longitude is in the range -180 and +180 specifying coordinates west and east of the Prime Meridian, respectively.
For reference, the Equator has a latitude of 0°, the North pole has a latitude of 90° north (written 90° N or +90°), and the South pole has a latitude of -90°.
The Prime Meridian has a longitude of 0° that goes through Greenwich, England. The International Date Line (IDL) roughly follows the 180° longitude. A longitude with a positive value falls in the eastern hemisphere and the negative value falls in the western hemisphere.
Six (6) decimal places precision in coordinates using decimal degrees notation is at a 10 cm (or 0.1 meters) resolution. Each .000001 difference in coordinate decimal degree is approximately 10 cm in length. For example, the imagery of Google Earth and Google Maps is typically at the 1-meter resolution, and some places have a higher resolution of 1 inch per pixel. One meter resolution can be represented using 5 decimal places so more than 6 decimal places are extraneous for that resolution. The distance between longitudes at the equator is the same as latitude, but the distance between longitudes reaches zero at the poles as the lines of meridian converge at that point.
For millimeter (mm) precision then represent lat/lon with 8 decimal places in decimal degrees format. Since most applications don't need that level of precision 6 decimal places is sufficient for most cases.
In the other direction, whole decimal degrees represent a distance of ~111 km (or 60 nautical miles) and a 0.1 decimal degree difference represents a ~11 km distance.
Here is a table of # decimal places difference in latitude with the delta degrees and the estimated distance in meters using 0,0 as the starting point.
Decimal places | Decimal degrees | Distance (meters) | |
---|---|---|---|
1 | 0.10000000 | 11,057.43 | 11 km |
2 | 0.01000000 | 1,105.74 | 1 km |
3 | 0.00100000 | 110.57 | |
4 | 0.00010000 | 11.06 | |
5 | 0.00001000 | 1.11 | |
6 | 0.00000100 | 0.11 | 11 cm |
7 | 0.00000010 | 0.01 | 1 cm |
8 | 0.00000001 | 0.001 | 1 mm |
For DMS notation 1 arc second = 1/60/60 degree = ~30 meter length and 0.1 arc sec delta is ~3 meters.
Example:
0° 0' 0" W, 0° 0' 0" N -> 0° 0' 0" W, 0° 0' 1" N: ~30.715 meters
0° 0' 0" W, 0° 0' 0" N -> 0° 0' 0" W, 0° 0' 0.1" N: ~3.0715 meters
1 arc minute = 1/60 degree = ~2000 m (2 km)
Here is an amusing comic strip about coordinate precision.
From the horse's mouth: Explicit line joining
Two or more physical lines may be joined into logical lines using backslash characters (
\
), as follows: when a physical line ends in a backslash that is not part of a string literal or comment, it is joined with the following forming a single logical line, deleting the backslash and the following end-of-line character. For example:if 1900 < year < 2100 and 1 <= month <= 12 \ and 1 <= day <= 31 and 0 <= hour < 24 \ and 0 <= minute < 60 and 0 <= second < 60: # Looks like a valid date return 1
A line ending in a backslash cannot carry a comment. A backslash does not continue a comment. A backslash does not continue a token except for string literals (i.e., tokens other than string literals cannot be split across physical lines using a backslash). A backslash is illegal elsewhere on a line outside a string literal.
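Note that expressions in parentheses, square brackets or curly braces can be split over physical lines without backslashes (implicit line joining), which is often preferred; a sketch of the same check:
if (1900 < year < 2100 and 1 <= month <= 12
        and 1 <= day <= 31 and 0 <= hour < 24
        and 0 <= minute < 60 and 0 <= second < 60):
    # Looks like a valid date
    return 1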
For Xcode 8 you have to download a package named Additional Tools for Xcode 8.
For other versions (8.1, 8.2) get the package here
Double-click to open the dmg, and go to the Hardware directory. Double-click on Network Link Conditioner.prefPane.
Click on Install.
Now Network Link Conditioner will be available in System Preferences.
For versions older than Xcode 8, the package to be downloaded is called Hardware IO Tools for Xcode
. Get it from this page
Regarding Gordon M's answer above, the 1st and 2nd parameters in the mysqli_real_escape_string() call should be swapped for newer PHP versions, according to: http://php.net/manual/en/mysqli.real-escape-string.php
Another way to do it is to get the UriInfo instead of each individual QueryParam.
You will then be able to read whatever query parameters you need in your code:
@GET
@Path("/query")
public Response getUsers(@Context UriInfo info) {
String param_1 = info.getQueryParameters().getFirst("param_1");
String param_2 = info.getQueryParameters().getFirst("param_2");
return Response.ok().build(); // build whatever Response you need
}
The title is inside an <nyt_headline>
tag, which is nested inside an <h1>
tag and a <div>
tag with id "article".
soup.findAll('nyt_headline', limit=1)
Should work.
The article body is inside an <nyt_text>
tag, which is nested inside a <div>
tag with id "articleBody". Inside the <nyt_text>
element, the text itself is contained within <p>
tags. Images are not within those <p>
tags. It's difficult for me to experiment with the syntax, but I expect a working scrape to look something like this.
text = soup.findAll('nyt_text', limit=1)[0]
text.findAll('p')
If your key is a CHAR/VARCHAR or something of that type, another possible problem is different collation. Check if the charset is the same.
To let the app use formatted strings from resources, you should correct your XML. So, for example:
<string name="app_name">Your App name, ver.%d</string>
should be replaced with
<string name="app_name">Your App name, ver.%1$d</string>
You can see this for details.
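At runtime you then supply the argument via Context.getString(int, Object...), e.g. (a minimal sketch reusing the resource above):
// Inside an Activity or any other Context:
String title = getString(R.string.app_name, 7); // "Your App name, ver.7"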
>>> A = array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
>>> A = A.transpose()
>>> A = A[1:].transpose()
Add this instead of the pre-defined class
class ViewController: UIViewController, UITextFieldDelegate {
To remove keyboard when clicked outside the keyboard
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
self.view.endEditing(true)
}
and to remove keyboard when pressed enter
add this line in viewDidLoad()
inputField is the name of the textField used.
self.inputField.delegate = self
and add this function
func textFieldShouldReturn(textField: UITextField) -> Bool {
textField.resignFirstResponder()
return true
}
Another example, for when the table or database name contains unsupported characters such as a space or '-':
db='data-base'
db_d=''
db_d+='`'
db_d+=$db
db_d+='`'
myvariable=`mysql --user=$user --password=$password -e "SELECT A, B, C FROM $db_d.table_a;"`
If you want a uniformly distributed pseudorandom long in the range of [0,m
), try using the modulo operator and the absolute value method combined with the nextLong()
method as seen below:
Math.abs(rand.nextLong()) % m;
Where rand
is your Random object.
The modulo operator divides two numbers and outputs the remainder of those numbers. For example, 3 % 2
is 1
because the remainder of 3 and 2 is 1.
Since nextLong()
generates a pseudorandom long that can be negative (it draws from essentially the whole range of long), you will need to take the absolute value of it. If you don't, the modulo of the nextLong()
method has a 50% chance of returning a negative value, which is out of the range [0,m
).
(One caveat: Math.abs(Long.MIN_VALUE) is itself negative, and the modulo introduces a slight bias for large m, so this is an approximation rather than a perfectly uniform draw.)
What you initially requested was a uniformly distributed pseudorandom long in the range of [0,100). The following code does so:
Math.abs(rand.nextLong()) % 100;
ExtendOffice.com offers a VBA solution that worked for me in Excel 2016. Here's my description of the steps. I included additional details to make it easier. I also modified the VBA code slightly. If this doesn't work for you, retry the steps or check out the instructions on the ExtendOffice page.
Add data validation to a cell (or range of cells). Allow = List. Source = [the range with the values you want for the auto-complete / drop-down]. Click OK. You should now have a drop-down but with a weak auto-complete feature.
With a cell containing your newly added data validation, insert an ActiveX combo box (NOT a form control combo box). This is done from the Developer ribbon. If you don't have the Developer ribbon you will need to add it from the Excel options menu.
From the Developer tab in the Controls section, click "Design Mode". Select the combo box you just inserted. Then in the same ribbon section click "Properties". In the Properties window, change the name of the combo box to "TempComboBox".
Press ALT + F11 to go to the Visual Basic Editor. On the left-hand side, double click the worksheet with your data validation to open the code for that sheet. Copy and paste the following code onto the sheet. NOTE: I modified the code slightly so that it works even with Option Explicit
enabled at the top of the sheet.
Option Explicit
Private Sub Worksheet_SelectionChange(ByVal target As Range)
'Update by Extendoffice: 2018/9/21
' Update by Chris Brackett 2018-11-30
Dim xWs As Worksheet
Set xWs = Application.ActiveSheet
On Error Resume Next
Dim xCombox As OLEObject
Set xCombox = xWs.OLEObjects("TempCombo")
' Added this to auto-select all text when activating the combo box.
xCombox.SetFocus
With xCombox
.ListFillRange = vbNullString
.LinkedCell = vbNullString
.Visible = False
End With
Dim xStr As String
Dim xArr
If target.Validation.Type = xlValidateList Then
' The target cell contains Data Validation.
target.Validation.InCellDropdown = False
' Cancel the "SelectionChange" event.
Dim Cancel As Boolean
Cancel = True
xStr = target.Validation.Formula1
xStr = Right(xStr, Len(xStr) - 1)
If xStr = vbNullString Then Exit Sub
With xCombox
.Visible = True
.Left = target.Left
.Top = target.Top
.Width = target.Width + 5
.Height = target.Height + 5
.ListFillRange = xStr
If .ListFillRange = vbNullString Then
xArr = Split(xStr, ",")
Me.TempCombo.List = xArr
End If
.LinkedCell = target.Address
End With
xCombox.Activate
Me.TempCombo.DropDown
End If
End Sub
Private Sub TempCombo_KeyDown( _
ByVal KeyCode As MSForms.ReturnInteger, _
ByVal Shift As Integer)
Select Case KeyCode
Case 9 ' Tab key
Application.ActiveCell.Offset(0, 1).Activate
Case 13 ' Enter key
Application.ActiveCell.Offset(1, 0).Activate
End Select
End Sub
Make sure the the "Microsoft Forms 2.0 Object Library" is referenced. In the Visual Basic Editor, go to Tools > References, check the box next to that library (if not already checked) and click OK. To verify that it worked, go to Debug > Compile VBA Project.
Finally, save your project and click in a cell with the data validation you added. You should see a combo box with a drop-down list of suggestions that updates with each letter you type.
You don't need the separate fill item. In fact, it's invalid. You just have to add a solid
block to the shape
. The subsequent stroke
draws on top of the solid
:
<shape
xmlns:android="http://schemas.android.com/apk/res/android"
android:shape="rectangle">
<corners android:radius="5dp" />
<solid android:color="@android:color/white" />
<stroke
android:width="1dip"
android:color="@color/bggrey" />
</shape>
You also don't need the layer-list
if you only have one shape
.
If your published app has no AdMob ads, make sure that in its settings in the Google Play Developer Console (under "Pricing and Distribution") the "Contains ads" box is unchecked. Always develop with a test ad unit ID and watch the logcat output.
In general, multiple single-row inserts will be slower because of the per-statement connection overhead. Doing multiple inserts at once reduces the cost of that overhead per insert.
Depending on which language you are using, you can create a batch in your programming/scripting language before going to the database, add each insert to the batch, and then execute the whole batch using one connect operation. Here's an example in Java.
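A minimal sketch using JDBC batching (the table name, columns, connection string and credentials here are hypothetical):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string and credentials - adjust for your database.
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test", "user", "pass");
             PreparedStatement ps = conn.prepareStatement("INSERT INTO users (name) VALUES (?)")) {
            conn.setAutoCommit(false); // group the whole batch into one transaction
            for (String name : new String[] {"alice", "bob", "carol"}) {
                ps.setString(1, name);
                ps.addBatch(); // queue the insert instead of executing it immediately
            }
            ps.executeBatch(); // send all queued inserts in one round trip
            conn.commit();
        }
    }
}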
In your 'head' section, add this code:
<style>
input[type='text'] { font-size: 24px; }
</style>
Or you can add just the:
input[type='text'] { font-size: 24px; }
to a CSS file which can later be included.
You can also change the font face by using the CSS property: font-family
font-family: monospace;
So you can have a CSS code like this:
input[type='text'] { font-size: 24px; font-family: monospace; }
You can find further help at the W3Schools website.
I suggest you have a look at the CSS3 specification. With CSS3 you can also load a font from the web instead of being limited to the most common fonts or telling the user to download the font you're using.
First, define a class to store your data:
public class YourDataClass {
private String messageType;
private Timestamp timestamp;
private int count;
private int version;
// your get/setters
...........
}
And then initialize your map:
Map<Integer, YourDataClass> map = new HashMap<Integer, YourDataClass>();
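Usage might then look like this (the setters are assumed to follow the usual getter/setter naming):
YourDataClass data = new YourDataClass();
data.setMessageType("INFO"); // assumed setter
data.setCount(1);            // assumed setter
map.put(1001, data);         // keyed by, e.g., a message id
YourDataClass found = map.get(1001);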
On a Mac you can also pinch-zoom, i.e. move your thumb and index finger together or apart.
You can pass data to the view either as an array or by using the with method.
return view('greeting', ['name' => 'James']);
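The equivalent using the with method:
return view('greeting')->with('name', 'James');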
Try this -
<style>
table { table-layout: fixed; }
table th, table td { overflow: hidden; }
</style>
Please try this:
$('#select_id option[value="'+value+'"]').attr("disabled", true);
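If you are on jQuery 1.6 or newer, .prop() is generally preferred over .attr() for boolean properties such as disabled:
$('#select_id option[value="' + value + '"]').prop("disabled", true);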
In Go, there is a general rule that syntax should not hide complex/costly operations. Converting a string
to an interface{}
is done in O(1) time. Converting a []string
to an interface{}
is also done in O(1) time since a slice is still one value. However, converting a []string
to an []interface{}
is O(n) time because each element of the slice must be converted to an interface{}
.
The one exception to this rule is converting strings. When converting a string
to and from a []byte
or a []rune
, Go does O(n) work even though conversions are "syntax".
There is no standard library function that will do this conversion for you. You could make one with reflect, but it would be slower than the three line option.
Example with reflection:
func InterfaceSlice(slice interface{}) []interface{} {
s := reflect.ValueOf(slice)
if s.Kind() != reflect.Slice {
panic("InterfaceSlice() given a non-slice type")
}
// Keep the distinction between nil and empty slice input
if s.IsNil() {
return nil
}
ret := make([]interface{}, s.Len())
for i := 0; i < s.Len(); i++ {
ret[i] = s.Index(i).Interface()
}
return ret
}
Your best option though is just to use the lines of code you gave in your question:
b := make([]interface{}, len(a))
for i := range a {
b[i] = a[i]
}
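One common place this conversion is needed is when calling variadic functions such as fmt.Println, which take ...interface{}. For example:
a := []string{"one", "two", "three"}
b := make([]interface{}, len(a))
for i := range a {
    b[i] = a[i]
}
fmt.Println(b...) // prints: one two three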
You cannot do so - the browser will not allow this because of security concerns. Although there are workarounds, the fact is that you shouldn't count on this working. The following Stack Overflow questions are relevant here:
In addition to these, the new HTML5 specification states that browsers will need to feed a Windows compatible fakepath into the input type="file"
field, ostensibly for backward compatibility reasons.
So trying to obtain the path is worse than useless in newer browsers - you'll actually get a fake one instead.
First, you need to understand the relationships between your components. Then you can choose the right method of communication. I will try to explain all the methods that I know and use in my practice for communication between components.
What kinds of relationships between components can there be?
1. Parent > Child
Sharing Data via Input
This is probably the most common method of sharing data. It works by using the @Input()
decorator to allow data to be passed via the template.
parent.component.ts
import { Component } from '@angular/core';
@Component({
selector: 'parent-component',
template: `
<child-component [childProperty]="parentProperty"></child-component>
`,
styleUrls: ['./parent.component.css']
})
export class ParentComponent{
parentProperty = "I come from parent"
constructor() { }
}
child.component.ts
import { Component, Input } from '@angular/core';
@Component({
selector: 'child-component',
template: `
Hi {{ childProperty }}
`,
styleUrls: ['./child.component.css']
})
export class ChildComponent {
@Input() childProperty: string;
constructor() { }
}
This is a very simple method. It is easy to use. We can also catch changes to the data in the child component using ngOnChanges.
But do not forget that if we use an object as the data and only change the parameters of that object, the reference to it will not change, so ngOnChanges will not fire. Therefore, if we want to receive a modified object in a child component, it should be treated as immutable (replaced rather than mutated).
2. Child > Parent
Sharing Data via ViewChild
ViewChild allows one component to be injected into another, giving the parent access to its attributes and functions. One caveat, however, is that child
won’t be available until after the view has been initialized. This means we need to implement the AfterViewInit lifecycle hook to receive the data from the child.
parent.component.ts
import { Component, ViewChild, AfterViewInit } from '@angular/core';
import { ChildComponent } from "../child/child.component";
@Component({
selector: 'parent-component',
template: `
Message: {{ message }}
<child-component></child-component>
`,
styleUrls: ['./parent.component.css']
})
export class ParentComponent implements AfterViewInit {
@ViewChild(ChildComponent) child;
constructor() { }
message:string;
ngAfterViewInit() {
this.message = this.child.message
}
}
child.component.ts
import { Component} from '@angular/core';
@Component({
selector: 'child-component',
template: `
`,
styleUrls: ['./child.component.css']
})
export class ChildComponent {
message = 'Hello!';
constructor() { }
}
Sharing Data via Output() and EventEmitter
Another way to share data is to emit data from the child, which can be listened for by the parent. This approach is ideal when you want to share data changes that occur on things like button clicks, form entries, and other user events.
parent.component.ts
import { Component } from '@angular/core';
@Component({
selector: 'parent-component',
template: `
Message: {{message}}
<child-component (messageEvent)="receiveMessage($event)"></child-component>
`,
styleUrls: ['./parent.component.css']
})
export class ParentComponent {
constructor() { }
message:string;
receiveMessage($event) {
this.message = $event
}
}
child.component.ts
import { Component, Output, EventEmitter } from '@angular/core';
@Component({
selector: 'child-component',
template: `
<button (click)="sendMessage()">Send Message</button>
`,
styleUrls: ['./child.component.css']
})
export class ChildComponent {
message: string = "Hello!"
@Output() messageEvent = new EventEmitter<string>();
constructor() { }
sendMessage() {
this.messageEvent.emit(this.message)
}
}
3. Siblings
Child > Parent > Child
I explain other ways to communicate between siblings below, but you can already combine the methods above: the first child emits the data to the parent, and the parent passes it down to the second child.
parent.component.ts
import { Component } from '@angular/core';
@Component({
selector: 'parent-component',
template: `
Message: {{message}}
<child-one-component (messageEvent)="receiveMessage($event)"></child-one-component>
<child-two-component [childMessage]="message"></child-two-component>
`,
styleUrls: ['./parent.component.css']
})
export class ParentComponent {
constructor() { }
message: string;
receiveMessage($event) {
this.message = $event
}
}
child-one.component.ts
import { Component, Output, EventEmitter } from '@angular/core';
@Component({
selector: 'child-one-component',
template: `
<button (click)="sendMessage()">Send Message</button>
`,
styleUrls: ['./child-one.component.css']
})
export class ChildOneComponent {
message: string = "Hello!"
@Output() messageEvent = new EventEmitter<string>();
constructor() { }
sendMessage() {
this.messageEvent.emit(this.message)
}
}
child-two.component.ts
import { Component, Input } from '@angular/core';
@Component({
selector: 'child-two-component',
template: `
{{ childMessage }}
`,
styleUrls: ['./child-two.component.css']
})
export class ChildTwoComponent {
@Input() childMessage: string;
constructor() { }
}
4. Unrelated Components
All the methods that I have described below can be used for all the above options for the relationship between the components. But each has its own advantages and disadvantages.
Sharing Data with a Service
When passing data between components that lack a direct connection, such as siblings, grandchildren, etc, you should be using a shared service. When you have data that should always be in sync, I find the RxJS BehaviorSubject very useful in this situation.
data.service.ts
import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs';
@Injectable()
export class DataService {
private messageSource = new BehaviorSubject('default message');
currentMessage = this.messageSource.asObservable();
constructor() { }
changeMessage(message: string) {
this.messageSource.next(message)
}
}
first.component.ts
import { Component, OnInit } from '@angular/core';
import { DataService } from "../data.service";
@Component({
selector: 'first-componennt',
template: `
{{message}}
`,
styleUrls: ['./first.component.css']
})
export class FirstComponent implements OnInit {
message:string;
constructor(private data: DataService) { }
ngOnInit() {
// Subscribe once here; subscribing in both the constructor and ngOnInit
// (as some Angular 6 examples show) would create two subscriptions.
this.data.currentMessage.subscribe(message => this.message = message)
}
}
second.component.ts
import { Component, OnInit } from '@angular/core';
import { DataService } from "../data.service";
@Component({
selector: 'second-component',
template: `
{{message}}
<button (click)="newMessage()">New Message</button>
`,
styleUrls: ['./second.component.css']
})
export class SecondComponent implements OnInit {
message:string;
constructor(private data: DataService) { }
ngOnInit() {
this.data.currentMessage.subscribe(message => this.message = message)
}
newMessage() {
this.data.changeMessage("Hello from Second Component")
}
}
Sharing Data with a Route
Sometimes you need to not only pass simple data between components but also save some state of the page. For example, we want to save some filter in an online market, then copy the link and send it to a friend - and we expect the page to open in the same state as ours. The first, and probably the quickest, way to do this would be to use query parameters.
Query parameters look more along the lines of /people?id=
where id
can equal anything and you can have as many parameters as you want. The query parameters would be separated by the ampersand character.
When working with query parameters, you don’t need to define them in your routes file, and they can be named parameters. For example, take the following code:
page1.component.ts
import {Component} from "@angular/core";
import {Router, NavigationExtras} from "@angular/router";
@Component({
selector: "page1",
template: `
<button (click)="onTap()">Navigate to page2</button>
`,
})
export class Page1Component {
public constructor(private router: Router) { }
public onTap() {
let navigationExtras: NavigationExtras = {
queryParams: {
"firstname": "Nic",
"lastname": "Raboy"
}
};
this.router.navigate(["page2"], navigationExtras);
}
}
In the receiving page, you would receive these query parameters like the following:
page2.component.ts
import {Component} from "@angular/core";
import {ActivatedRoute} from "@angular/router";
@Component({
selector: "page2",
template: `
<span>{{firstname}}</span>
<span>{{lastname}}</span>
`,
})
export class Page2Component {
firstname: string;
lastname: string;
public constructor(private route: ActivatedRoute) {
this.route.queryParams.subscribe(params => {
this.firstname = params["firstname"];
this.lastname = params["lastname"];
});
}
}
NgRx
The last way, which is more complicated but more powerful, is to use NgRx. This library is not for data sharing; it is a powerful state management library. I can't in a short example explain how to use it, but you can go to the official site and read the documentation about it.
To me, NgRx Store solves multiple issues. For example, when you have to deal with observables and when responsibility for some observable data is shared between different components, the store actions and reducer ensure that data modifications will always be performed "the right way".
It also provides a reliable solution for HTTP requests caching. You will be able to store the requests and their responses so that you can verify that the request you're making does not have a stored response yet.
You can read about NgRx and understand whether you need it in your app or not:
Finally, I want to say that before choosing a method for sharing data, you need to understand how this data will be used in the future. Maybe right now an @Input
decorator is enough for sharing a username and surname, but later you will add a new component or a new module (for example, an admin panel) which needs more information about the user. In that case a service for the user data, or some other way of sharing it, may be the better choice. You need to think about it before you start implementing data sharing.
You can create C#-like extension/helper methods by (re)implementing the Collection interface and adding your own methods - an example for a Java Collection:
public class RockCollection<T extends Comparable<T>> implements Collection<T> {
private Collection<T> _list = new ArrayList<T>();
//###########Custom extension methods###########
public Collection<T> doSomething() {
//do some stuff, then return the underlying collection
return _list;
}
//proper examples
public T find(Predicate<T> predicate) {
return _list.stream()
.filter(predicate)
.findFirst()
.get();
}
public List<T> findAll(Predicate<T> predicate) {
return _list.stream()
.filter(predicate)
.collect(Collectors.<T>toList());
}
public String join(String joiner) {
StringBuilder aggregate = new StringBuilder("");
_list.forEach( item ->
aggregate.append(item.toString() + joiner)
);
return aggregate.toString().substring(0, aggregate.length() - 1);
}
public List<T> reverse() {
List<T> listToReverse = (List<T>)_list;
Collections.reverse(listToReverse);
return listToReverse;
}
public List<T> sort(Comparator<T> sortComparer) {
List<T> listToReverse = (List<T>)_list;
Collections.sort(listToReverse, sortComparer);
return listToReverse;
}
public int sum() {
List<T> list = (List<T>)_list;
int total = 0;
for (T aList : list) {
total += Integer.parseInt(aList.toString());
}
return total;
}
public List<T> minus(RockCollection<T> listToMinus) {
List<T> list = (List<T>)_list;
listToMinus.forEach(list::remove);
return list;
}
public Double average() {
List<T> list = (List<T>)_list;
Double total = 0.0;
for (T aList : list) {
total += Double.parseDouble(aList.toString());
}
return total / list.size();
}
public T first() {
return _list.stream().findFirst().get();
//.collect(Collectors.<T>toList());
}
public T last() {
List<T> list = (List<T>)_list;
return list.get(_list.size() - 1);
}
//##############################################
//Re-implement existing methods
@Override
public int size() {
return _list.size();
}
@Override
public boolean isEmpty() {
return _list == null || _list.size() == 0;
}
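Usage could then look like the following sketch, assuming the remaining Collection methods (add, iterator, and so on) are implemented by delegating to _list just like size() and isEmpty() above:
RockCollection<Integer> numbers = new RockCollection<>();
numbers.add(3); // assumes add() delegates to _list
numbers.add(1);
numbers.add(2);
System.out.println(numbers.join(","));        // 3,1,2
System.out.println(numbers.find(n -> n > 1)); // 3
System.out.println(numbers.sum());            // 6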
There's a C implementation here:
http://www.cs.rit.edu/~ncs/color/t_convert.html
Should be very straightforward to convert to C#, as almost no functions are called - just calculations.
found via Google
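For reference, here is a rough C# sketch of the RGB-to-HSV half, following the same calculations as the linked C code (r, g and b are assumed to be in [0, 1]; h comes out in degrees):
using System;

static class ColorConvert
{
    public static void RgbToHsv(double r, double g, double b, out double h, out double s, out double v)
    {
        double min = Math.Min(r, Math.Min(g, b));
        double max = Math.Max(r, Math.Max(g, b));
        v = max;
        double delta = max - min;
        if (max == 0 || delta == 0) { s = 0; h = 0; return; } // achromatic (the C code uses h = -1 here)
        s = delta / max;
        if (r == max) h = (g - b) / delta;          // between yellow and magenta
        else if (g == max) h = 2 + (b - r) / delta; // between cyan and yellow
        else h = 4 + (r - g) / delta;               // between magenta and cyan
        h *= 60;                                    // convert to degrees
        if (h < 0) h += 360;
    }
}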
Setting \itemindent
for a new itemize environment solves the problem:
\newenvironment{beameritemize}
{ \begin{itemize}
\setlength{\itemsep}{1.5ex}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\addtolength{\itemindent}{-2em} }
{ \end{itemize} }
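The new environment is then used like a normal itemize:
\begin{beameritemize}
  \item First point
  \item Second point
\end{beameritemize}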
System.Management.Automation on Nuget
System.Management.Automation.dll on NuGet, newer package from 2015, not unlisted as the previous one!
Microsoft PowerShell team packages on NuGet
Update: package is now owned by PowerShell Team. Huzzah!
I would recommend using jQuery with this function:
$(document).ready(function(){
$('#about').addClass('expand');
});
This will add the expand class to an element with an id of about when the DOM is ready on page load.
Unit testing is one of the most widely adopted methodologies for high-quality code. Its contribution to more stable, independent and documented code is well proven. Unit test code is considered and handled as an integral part of your repository, and as such requires development and maintenance. However, developers often encounter situations where the resources invested in unit tests were not as fruitful as one would expect. In an ideal world, every method we code would have a series of tests covering its code and validating its correctness. However, usually due to time limitations, we either skip some tests or write poor-quality ones. In such a reality, keeping in mind the amount of resources invested in unit test development and maintenance, one must ask: given the available time, which code deserves testing the most? And of the existing tests, which are actually worth keeping and maintaining? See here
Actually it's a JVM limitation capping it at 2^30 - 4, which is 1073741820 - not 2^31 - 1. I don't know why, but I tested it manually on the JDK: 2^30 - 3 still throws a VM exception.
Edit: fixed -1 to -4, checked on the Windows JVM
$_SERVER['HTTP_REFERER'];
But if you run a file (that contains the above code) by directly hitting the URL in the browser, then you get the following notice:
Notice: Undefined index: HTTP_REFERER
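To avoid the notice, check that the key exists before using it:
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';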
I encountered the No module named psycopg2.extensions
error when trying to run pip2 install psycopg2
on a Mac running Mavericks (10.9). I don't think my stack trace included a message about gcc, and it also included a hint:
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
I looked for the pg_config
file in my Postgres install and added the folder containing it to my path: /Applications/Postgres.app/Contents/Versions/9.4/bin
. Your path may be different, especially if you have a different version of Postgres installed - I would just poke around until you find the bin/
folder. After doing this, the installation worked.
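For example, the line I would add to a shell profile looks like this (adjust the version and path to your install):
export PATH=$PATH:/Applications/Postgres.app/Contents/Versions/9.4/bin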
<?php
$time = '21:32:32';
$seconds = 0;
$parts = explode(':', $time);
if (count($parts) > 2) {
    // HH:MM:SS
    $seconds += $parts[0] * 3600;
    $seconds += $parts[1] * 60;
    $seconds += $parts[2];
} else {
    // MM:SS
    $seconds += $parts[0] * 60;
    $seconds += $parts[1];
}
echo $seconds;
To get the post by George Howarth working properly with more than one replacement you need to remove the break, assign the output to a variable ($line) and then output the variable:
$lookupTable = @{
'something1' = 'something1aa'
'something2' = 'something2bb'
'something3' = 'something3cc'
'something4' = 'something4dd'
'something5' = 'something5dsf'
'something6' = 'something6dfsfds'
}
$original_file = 'path\filename.abc'
$destination_file = 'path\filename.abc.new'
Get-Content -Path $original_file | ForEach-Object {
$line = $_
$lookupTable.GetEnumerator() | ForEach-Object {
if ($line -match $_.Key)
{
$line = $line -replace $_.Key, $_.Value
}
}
$line
} | Set-Content -Path $destination_file
This is because Gerrit is configured to require a Change-Id in the commit messages.
http://gerrit.googlecode.com/svn-history/r6114/documentation/2.1.7/error-missing-changeid.html
You have to change the message of every commit that you are pushing to include the Change-Id (using git filter-branch
) and only then push.
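Alternatively - this is a sketch, so replace the user and host with your own - you can install Gerrit's commit-msg hook, which inserts a Change-Id automatically, and then amend the last commit so the hook runs on it:
scp -p -P 29418 user@your-gerrit-server:hooks/commit-msg .git/hooks/
git commit --amend --no-edit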
The code overlooks calling the function move_uploaded_file(), which checks that the indicated file is a valid upload and moves it into place.
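A minimal sketch (the field name 'userfile' and the uploads folder are hypothetical):
if (move_uploaded_file($_FILES['userfile']['tmp_name'], '/path/to/uploads/' . basename($_FILES['userfile']['name']))) {
    echo 'File is valid and was successfully uploaded.';
}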
You may wish to review a simple example at:
For Swift 3
Usage
class LoginTVC: UITableViewController {
var loadingView : LoadingView!
override func viewDidLoad() {
super.viewDidLoad()
// CASE 1: To Show loadingView on load
loadingView = LoadingView(uiView: view, message: "Sending you verification code")
}
// CASE 2: To show loadingView on click of a button
@IBAction func showLoadingView(_ sender: UIButton) {
if let loaderView = loadingView{ // If loadingView already exists
if loaderView.isHidden() {
loaderView.show() // To show activity indicator
}
}
else{
loadingView = LoadingView(uiView: view, message: "Sending you verification code")
}
}
// CASE 3: To hide loadingView on click of a button
@IBAction func hideLoadingView(_ sender: UIButton) {
if let loaderView = loadingView { // If loadingView already exists
loaderView.hide()
}
}
}
LoadingView Class
class LoadingView {
let uiView : UIView
let message : String
let messageLabel = UILabel()
let loadingSV = UIStackView()
let loadingView = UIView()
let activityIndicator: UIActivityIndicatorView = UIActivityIndicatorView(activityIndicatorStyle: UIActivityIndicatorViewStyle.gray)
init(uiView: UIView, message: String) {
self.uiView = uiView
self.message = message
self.setup()
}
func setup(){
let viewWidth = uiView.bounds.width
let viewHeight = uiView.bounds.height
// Configuring the message label
messageLabel.text = message
messageLabel.textColor = UIColor.darkGray
messageLabel.textAlignment = .center
messageLabel.numberOfLines = 3
messageLabel.lineBreakMode = .byWordWrapping
// Creating stackView to center and align Label and Activity Indicator
loadingSV.axis = .vertical
loadingSV.distribution = .equalSpacing
loadingSV.alignment = .center
loadingSV.addArrangedSubview(activityIndicator)
loadingSV.addArrangedSubview(messageLabel)
// Creating loadingView, this acts as a background for label and activityIndicator
loadingView.frame = uiView.frame
loadingView.center = uiView.center
loadingView.backgroundColor = UIColor.darkGray.withAlphaComponent(0.3)
loadingView.clipsToBounds = true
// Disabling auto constraints
loadingSV.translatesAutoresizingMaskIntoConstraints = false
// Adding subviews
loadingView.addSubview(loadingSV)
uiView.addSubview(loadingView)
activityIndicator.startAnimating()
// Views dictionary
let views = [
"loadingSV": loadingSV
]
// Constraints for loadingSV
uiView.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "H:|-[loadingSV(300)]-|", options: [], metrics: nil, views: views))
uiView.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:|-\(viewHeight / 3)-[loadingSV(50)]-|", options: [], metrics: nil, views: views))
}
// Call this method to show loadingView
func show() {
loadingView.isHidden = false
}
// Call this method to hide loadingView
func hide(){
loadingView.isHidden = true
}
// Call this method to check whether loadingView is currently hidden
func isHidden() -> Bool {
return loadingView.isHidden
}
}
It is important to notice that in C# the char type is stored as Unicode UTF-16.
char c = (char)88;
or
char c = Convert.ToChar(88)
int asciiCode = (int)'A';
The cast gives you the UTF-16 code unit, which matches the ASCII code only for characters in the ASCII range. For example:
string str = "X?????????";
Console.WriteLine((int)str[0]);
Console.WriteLine((int)str[1]);
will print
88
3626
Extended ASCII ranges from 0 to 255.
Using the Symbol
char c = 'X';
Using the Unicode code
char c = '\u0058';
Using the Hexadecimal
char c = '\x0058';
if version < 8.4.0
pg_dump -D -t <table> <database>
Add -a
before the -t
if you only want the INSERTs, without the CREATE TABLE etc to set up the table in the first place.
version >= 8.4.0
pg_dump --column-inserts --data-only --table=<table> <database>
There are two ways you can do that
Start your comment with "<!--"
and end your comment with "-->".
Example: <!-- my comment goes here -->
Highlight the part you want to comment out and press CTRL + SHIFT + / (the exact shortcut depends on your editor).
How about if you're copying each column in a sheet to different sheets? Example: column B of mysheet to column B of sheet1, column C of mysheet to column B of sheet2...
Look at node-ffi.
node-ffi is a Node.js addon for loading and calling dynamic libraries using pure JavaScript. It can be used to create bindings to native libraries without writing any C++ code.
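A minimal sketch along the lines of the node-ffi README, binding ceil() from the C math library:
var ffi = require('ffi');

// Describe each native function as: name -> [return type, [argument types]]
var libm = ffi.Library('libm', {
  'ceil': ['double', ['double']]
});

console.log(libm.ceil(1.5)); // 2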
// Convert a BigDecimal to a String using the method below
public static String removeTrailingZeros(BigDecimal tempDecimal)
{
    tempDecimal = tempDecimal.stripTrailingZeros();
    String tempString = tempDecimal.toPlainString();
    return tempString;
}
// Calling removeTrailingZeros
BigDecimal output = new BigDecimal(0);
String str = removeTrailingZeros(output);
We need to define what a web service is before describing the difference between SOAP and WSDL, since both are components of a web service.
Most applications are developed to interact with users, the user enters or searches for data through an interface and the application then responds to the user's input.
A Web service does more or less the same thing except that a Web service application communicates only from machine to machine or application to application. There is often no direct user interaction.
A Web service basically is a collection of open protocols that is used to exchange data between applications. The use of open protocols enables Web services to be platform independent. Software that are written in different programming languages and that run on different platforms can use Web services to exchange data over computer networks such as the Internet. In other words, Windows applications can talk to PHP, Java and Perl applications and many others, which in normal circumstances would not be possible.
How Do Web Services Work?
Because different applications are written in different programming languages, they often cannot communicate with each other. A Web service enables this communication by using a combination of open protocols and standards, chiefly XML, SOAP and WSDL. A Web service uses XML to tag data, SOAP to transfer a message and finally WSDL to describe the availability of services. Let's take a look at these three main components of a Web service application.
The Simple Object Access Protocol or SOAP is a protocol for sending and receiving messages between applications without confronting interoperability issues (interoperability meaning the platform that a Web service is running on becomes irrelevant). Another protocol that has a similar function is HTTP. It is used to access Web pages or to surf the Net. HTTP ensures that you do not have to worry about what kind of Web server -- whether Apache or IIS or any other -- serves you the pages you are viewing or whether the pages you view were created in ASP.NET or HTML.
Below is an example of a SOAP request and response message
SOAP Request:
POST /InStock HTTP/1.1
Host: www.bookshop.org
Content-Type: application/soap+xml; charset=utf-8
Content-Length: nnn
<?xml version="1.0"?>
<soap:Envelope
xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
<soap:Body xmlns:m="http://www.bookshop.org/prices">
<m:GetBookPrice>
<m:BookName>The Fleamarket</m:BookName>
</m:GetBookPrice>
</soap:Body>
</soap:Envelope>
SOAP Response:
POST /InStock HTTP/1.1
Host: www.bookshop.org
Content-Type: application/soap+xml; charset=utf-8
Content-Length: nnn
<?xml version="1.0"?>
<soap:Envelope
xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
<soap:Body xmlns:m="http://www.bookshop.org/prices">
<m:GetBookPriceResponse>
<m:Price>10.95</m:Price>
</m:GetBookPriceResponse>
</soap:Body>
</soap:Envelope>
Although both messages look similar, they carry out different methods. For instance, looking at the above examples you can see that the requesting message uses the GetBookPrice
method to get the book price. The response is carried out by the GetBookPriceResponse
method, which is going to be the message that you as the "requestor" will see. You can also see that the messages are composed using XML.
WSDL is a document that describes a Web service and also tells you how to access and use its methods.
Take a look at a sample WSDL file:
<?xml version="1.0" encoding="UTF-8"?>
<definitions name ="DayOfWeek"
targetNamespace="http://www.roguewave.com/soapworx/examples/DayOfWeek.wsdl"
xmlns:tns="http://www.roguewave.com/soapworx/examples/DayOfWeek.wsdl"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns="http://schemas.xmlsoap.org/wsdl/">
<message name="DayOfWeekInput">
<part name="date" type="xsd:date"/>
</message>
<message name="DayOfWeekResponse">
<part name="dayOfWeek" type="xsd:string"/>
</message>
<portType name="DayOfWeekPortType">
<operation name="GetDayOfWeek">
<input message="tns:DayOfWeekInput"/>
<output message="tns:DayOfWeekResponse"/>
</operation>
</portType>
<binding name="DayOfWeekBinding" type="tns:DayOfWeekPortType">
<soap:binding style="document"
transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="GetDayOfWeek">
<soap:operation soapAction="getdayofweek"/>
<input>
<soap:body use="encoded"
namespace="http://www.roguewave.com/soapworx/examples"
encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body use="encoded"
namespace="http://www.roguewave.com/soapworx/examples"
encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
</binding>
<service name="DayOfWeekService" >
<documentation>
Returns the day-of-week name for a given date
</documentation>
<port name="DayOfWeekPort" binding="tns:DayOfWeekBinding">
<soap:address location="http://localhost:8090/dayofweek/DayOfWeek"/>
</port>
</service>
</definitions>
The main things to remember about a WSDL file are that it provides you with:
Use Request.Form
to get the data.
Controller:
[HttpPost]
public ActionResult Index(int? id)
{
string jsonData= Request.Form[0]; // The data from the POST
}
I wrote this to try it out.
View:
<input type="button" value="post" id="btnPost" />
<script type="text/javascript">
$(function () {
var test = {
number: 456,
name: "Ryu"
}
$("#btnPost").click(function () {
$.post('@Url.Action("Index", "Home")', JSON.stringify(test));
});
});
</script>
and reading Request.Form[0]
or Request.Params[0]
in the controller gets the data.
Note that I didn't write a <form> tag
in the view.
DELETE FROM on_search WHERE search_date < NOW() - INTERVAL N DAY
Replace N with your day count
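For example, to delete rows older than 30 days:
DELETE FROM on_search WHERE search_date < NOW() - INTERVAL 30 DAY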
On Visual Studio Community 7.5.3 on Mac this works for me:
Ctrl + Space
I kept having this problem because Windows was setting my node_modules
folder to read-only. Make sure you uncheck that.
Those answers aren't bad, but they are limited in that they don't actually allow you to enter data. We had a similar problem where we were using barcode readers to enter data into a field, but we wanted to suppress the keyboard.
This is what I put together, it works pretty well:
https://codepen.io/bobjase/pen/QrQQvd/
<!-- must be a select box with no children to suppress the keyboard -->
input: <select id="hiddenField" />
<span id="fakecursor" />
<input type="text" readonly="readonly" id="visibleField" />
<div id="cursorMeasuringDiv" />
#hiddenField {
height:17px;
width:1px;
position:absolute;
margin-left:3px;
margin-top:2px;
border:none;
border-width:0px 0px 0px 1px;
}
#cursorMeasuringDiv {
position:absolute;
visibility:hidden;
margin:0px;
padding:0px;
}
#hiddenField:focus {
border:1px solid gray;
border-width:0px 0px 0px 1px;
outline:none;
animation-name: cursor;
animation-duration: 1s;
animation-iteration-count: infinite;
}
@keyframes cursor {
from {opacity:0;}
to {opacity:1;}
}
// whenever the visible field gets focused
$("#visibleField").bind("focus", function(e) {
// silently shift the focus to the hidden select box
$("#hiddenField").focus();
$("#cursorMeasuringDiv").css("font", $("#visibleField").css("font"));
});
// whenever the user types on his keyboard in the select box
// which is natively supported for jumping to an <option>
$("#hiddenField").bind("keypress",function(e) {
// get the current value of the readonly field
var currentValue = $("#visibleField").val();
// and append the key the user pressed into that field
$("#visibleField").val(currentValue + e.key);
$("#cursorMeasuringDiv").text(currentValue + e.key);
// measure the width of the cursor offset
var offset = 3;
var textWidth = $("#cursorMeasuringDiv").width();
$("#hiddenField").css("marginLeft",Math.min(offset+textWidth,$("#visibleField").width()));
});
When you click in the <input>
box, it simulates a cursor in that box but really puts the focus on an empty <select>
box. Select boxes naturally allow for keypresses to support jumping to an element in the list so it was only a matter of rerouting the keypress to the original input and offsetting the simulated cursor.
This won't work for backspace, delete, etc... but we didn't need those. You could probably use jQuery's trigger to send the keyboard event directly to another input box somewhere but we didn't need to bother with that so I didn't do it.
Simply removing @RequestBody
annotation solves the problem (tested on Spring Boot 2):
@RestController
public class MyController {
@PostMapping
public void method(@Valid RequestDto dto) {
// method body ...
}
}
This code is ill-formed:
int&const icr=i;
Reference: C++17 [dcl.ref]/1:
Cv-qualified references are ill-formed except when the cv-qualifiers are introduced through the use of a typedef-name or decltype-specifier, in which case the cv-qualifiers are ignored.
This rule has been present in all standardized versions of C++. Because the code is ill-formed:
The compiler should reject the program; and if it doesn't, the executable's behaviour is completely undefined.
NB: Not sure how none of the other answers mentioned this yet... nobody's got access to a compiler?
'Date' is your index, so you want to do:
print (df.index.min())
print (df.index.max())
2014-03-13 00:00:00
2014-03-31 00:00:00
In recruitment terms, having been on both sides of the fence, definitely put HTML under 'programming languages', or perhaps more safely under 'technologies'
Yes, we all know that it is a markup language and not a programming language. But a) recruitment agencies don't know and don't care, and b) employers don't know and don't care. Really.
And pointing out their ignorance will only serve you ill. And the techies who eventually see your CV will be grateful for a candidate who has heard of HTML, and won't worry about the taxonomy.
Honestly, it isn't an issue.
If you are using the background image for the rounded corners then I would rather increase the padding style of the main div to give enough room for the rounded corners of the background image to be visible.
Try increasing the padding of the main div style:
#mainWrapperDivWithBGImage
{
background: url("myImageWithRoundedCorners.jpg") no-repeat scroll 0 0 transparent;
height: 248px;
margin: 0;
overflow: hidden;
padding: 10px 10px;
width: 996px;
}
P.S: I assume the rounded corners have a radius of 10px.
Look at getpass module
>>> import getpass
>>> getpass.getuser()
'kostya'
Availability: Unix, Windows
p.s. Per comment below "this function looks at the values of various environment variables to determine the user name. Therefore, this function should not be relied on for access control purposes (or possibly any other purpose, since it allows any user to impersonate any other)."
puts 'abcdefg'.start_with?('abc') #=> true
[edit] This is something I didn't know before this question: start_with?
takes multiple arguments.
'abcdefg'.start_with?( 'xyz', 'opq', 'ab')
Another useful property of the cross product is that its magnitude is related to the sine of the angle between the two vectors:
|a x b| = |a| * |b| * sin(theta)
or
sin(theta) = |a x b| / (|a| * |b|)
So, in implementation 1 above, if a
and b
are known in advance to be unit vectors, then the result of that function is exactly that sin(theta) value.
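For illustration, a small sketch (with hypothetical 3-D unit vectors) that recovers the angle from the cross product's magnitude:
import math

a = (1.0, 0.0, 0.0)  # assumed unit length
b = (0.0, 1.0, 0.0)  # assumed unit length

cross = (a[1]*b[2] - a[2]*b[1],
         a[2]*b[0] - a[0]*b[2],
         a[0]*b[1] - a[1]*b[0])
sine = math.sqrt(sum(c * c for c in cross))  # |a x b| = sin(theta) for unit vectors
print(math.degrees(math.asin(sine)))         # 90.0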
Check out this article. It is very exhaustive and you will find how to extract various hardware information.
Quote from the article:
To get hardware information, you need to create an object of ManagementObjectSearcher class.
using System.Management;
ManagementObjectSearcher searcher = new ManagementObjectSearcher("select * from " + Key);
foreach (ManagementObject share in searcher.Get()) {
// Some Codes ...
}
The Key on the code above, is a variable that is replaced with appropriate data. For example, to get the information of the CPU, you have to replace the Key with Win32_Processor.
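For example, to print the processor name using the Win32_Processor class:
using System;
using System.Management;

class HardwareInfo
{
    static void Main()
    {
        // Win32_Processor is the WMI class for CPUs; swap it for other hardware classes.
        var searcher = new ManagementObjectSearcher("select * from Win32_Processor");
        foreach (ManagementObject share in searcher.Get())
        {
            Console.WriteLine(share["Name"]); // e.g. the CPU model string
        }
    }
}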
Delete the vendor folder and run the composer install command. It works 100% of the time for me.
Since C++11, you can also use a lambda expression instead of defining a comparator struct:
auto comp = [](const string& a, const string& b) { return a.length() < b.length(); };
map<string, string, decltype(comp)> my_map(comp);
my_map["1"] = "a";
my_map["three"] = "b";
my_map["two"] = "c";
my_map["fouuur"] = "d";
for(auto const &kv : my_map)
cout << kv.first << endl;
Output:
1
two
three
fouuur
I'd like to repeat the final note of Georg's answer: When comparing by length you can only have one string of each length in the map as a key.
To do a border along one side of a select in IE use IE's filters:
select.required { border-left:2px solid red; filter: progid:DXImageTransform.Microsoft.dropshadow(OffX=-2, OffY=0,color=#FF0000) }
I put a border on one side only of all my inputs for required status.
There are probably other filter effects that do a better job for an all-round border ...
http://msdn.microsoft.com/en-us/library/ms532853(v=VS.85).aspx
This is because you have the wrong selector. According to your markup, .ui-icon
and .ui-icon-circle-triangle-w
should point to the same <span>
element. So you should use:
$(".ui-icon.ui-icon-circle-triangle-w").html("<<");
or
$(".ui-datepicker-prev .ui-icon").html("<<");
or
$(".ui-datepicker-prev span").html("<<");
FWIW, here's the dictionary thing. After setting a reference to the Microsoft Scripting Runtime, you can jack around with the array size of avInput to match your needs.
Sub somemacro()
Dim avInput As Variant
Dim uvals As Dictionary
Dim i As Integer
Dim rop As Range
avInput = Sheets("data").UsedRange
Set uvals = New Dictionary
For i = 1 To UBound(avInput, 1)
If uvals.Exists(avInput(i, 1)) = False Then
uvals.Add avInput(i, 1), 1
Else
uvals.Item(avInput(i, 1)) = uvals.Item(avInput(i, 1)) + 1
End If
Next i
ReDim avInput(1 To uvals.Count)
i = 1
For Each kv In uvals.Keys
avInput(i) = kv
i = i + 1
Next kv
Set rop = Sheets("sheet2").Range("a1")
rop.Resize(UBound(avInput, 1), 1) = Application.Transpose(avInput)
End Sub
Xerces-based tools will emit the following error
The processing instruction target matching "[xX][mM][lL]" is not allowed.
when an XML declaration is encountered anywhere other than at the top of an XML file.
This is a valid diagnostic message; other XML parsers should issue a similar error message in this situation.
Some blank space or other visible content exists before the <?xml ?>
declaration.
Resolution: remove blank space or any other visible content before the XML declaration.
Some invisible content exists before the <?xml ?>
declaration. Most commonly this is a Byte Order Mark
(BOM).
Resolution: Remove the BOM using techniques such as those suggested by the W3C page on the BOM in HTML.
A stray <?xml ?>
declaration exists within the XML content.
This can happen when XML files are combined programmatically or
via cut-and-paste. There can only be one <?xml ?>
declaration
in an XML file, and it can only be at the top.
Resolution: Search for
<?xml
in a case-insensitive manner, and remove all but the top XML
declaration from the file.
I see users in the comments wondering how to disable cell editing while allowing row deletion: I managed to do this by setting all columns individually to read-only, instead of the DataGrid itself.
<DataGrid IsReadOnly="False">
<DataGrid.Columns>
<DataGridTextColumn IsReadOnly="True"/>
<DataGridTextColumn IsReadOnly="True"/>
</DataGrid.Columns>
</DataGrid>
A node is the base class for both elements and attributes (and basically all other XML representations too).